
Browsing by Author "Barceló Baeza, Pablo"

Now showing 1 - 7 of 7
    A Uniform Language to Explain Decision Trees
    (International Joint Conferences on Artificial Intelligence (IJCAI), 2024) Arenas Saavedra, Marcelo Alejandro; Barceló Baeza, Pablo; Bustamante Henríquez, Diego Emilio; Caraball Mieri, José Thomas; Subercaseaux, Bernardo
    The formal XAI community has studied a plethora of interpretability queries aiming to understand the classifications made by decision trees. However, a more uniform understanding of what questions we can hope to answer about these models, traditionally deemed easy to interpret, has remained elusive. In an initial attempt to understand uniform languages for interpretability, Arenas et al. (2021) proposed FOIL, a logic for explaining black-box ML models, and showed that it can express a variety of interpretability queries. However, we show that FOIL is limited in two important senses: (i) it is not expressive enough to capture some crucial queries, and (ii) its model-agnostic nature results in a high computational complexity for decision trees. In this paper, we carefully craft two fragments of first-order logic that allow for efficiently interpreting decision trees: Q-DT-FOIL and its optimization variant OPT-DT-FOIL. We show that our proposed logics can not only express a variety of interpretability queries considered in the previous literature, but also elegantly allow users to specify different objectives that the sought explanations should optimize for. Using finite model-theoretic techniques, we show that the different ingredients of Q-DT-FOIL are necessary for its expressiveness, and yet that queries in Q-DT-FOIL, as well as their optimization versions in OPT-DT-FOIL, can be evaluated with a polynomial number of calls to a SAT solver. Besides our theoretical results, we provide a SAT-based implementation of OPT-DT-FOIL evaluation that is performant on industry-size decision trees.
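    The kind of interpretability query these logics capture can be illustrated with a toy example. The sketch below (hypothetical names; a brute-force enumeration rather than the paper's SAT-based evaluation) checks whether a partial instance is a "sufficient reason" for a decision tree, i.e., whether every completion of it receives the same classification:

    ```python
    from itertools import product

    # Toy binary decision tree: ('leaf', label) or (feature_index, left, right),
    # where the left branch means the feature is 0 and the right branch means 1.
    TREE = (0,
            ('leaf', 0),
            (1, ('leaf', 0), ('leaf', 1)))

    def classify(tree, x):
        """Walk the tree following the bits of instance x until a leaf."""
        while tree[0] != 'leaf':
            feat, left, right = tree
            tree = right if x[feat] else left
        return tree[1]

    def is_sufficient_reason(tree, partial, n_features):
        """Brute-force check: do all completions of the partial instance
        (a dict feature -> bit) receive the same classification?"""
        free = [i for i in range(n_features) if i not in partial]
        labels = set()
        for bits in product([0, 1], repeat=len(free)):
            x = dict(partial)
            x.update(zip(free, bits))
            labels.add(classify(tree, [x[i] for i in range(n_features)]))
        return len(labels) == 1
    ```

    Here fixing feature 0 to 0 already determines the class, while fixing feature 1 to 1 does not; the SAT-based approach of the paper answers such queries without enumerating all completions.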
    Attention is Turing complete
    (2021) Pérez, Jorge; Barceló Baeza, Pablo; Marinkovic, Javier
    Alternatives to recurrent neural networks, in particular architectures based on self-attention, are gaining momentum for processing input sequences. In spite of their relevance, the computational properties of such networks have not yet been fully explored. We study the computational power of the Transformer, one of the most paradigmatic architectures exemplifying self-attention. We show that the Transformer with hard attention is Turing complete, based exclusively on its capacity to compute and access internal dense representations of the data. Our study also reveals some minimal sets of elements needed to obtain this completeness result.
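    Hard attention, on which the result relies, replaces the usual softmax-weighted average with a selection of the single highest-scoring position. A minimal sketch (plain Python, hypothetical function name) of this mechanism:

    ```python
    def hard_attention(query, keys, values):
        """Return the value at the position whose key has the maximal
        dot-product score with the query (hard attention), instead of
        a softmax-weighted average over all values (soft attention)."""
        scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
        best = max(range(len(scores)), key=lambda i: scores[i])
        return values[best]
    ```

    Because the output is a single selected value rather than a blend, hard attention can act as an exact memory lookup, which is what makes simulation arguments of this kind tractable.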
    Foundations of Modern Query Languages for Graph Databases
    (2017) Angles, R.; Arenas Saavedra, Marcelo Alejandro; Barceló Baeza, Pablo; Hogan, A.; Reutter de la Maza, Juan; Vrgoc, Domagoj
    Redes neuronales para extracción de información relevante de sentencias legales
    (2023) Suárez Carbonell, Lucas Andrés; Barceló Baeza, Pablo; Pontificia Universidad Católica de Chile. Escuela de Ingeniería
    In recent years, Natural Language Processing (NLP) has used Machine Learning techniques to represent fragments of text. The introduction of the Transformer architecture (Vaswani et al., 2017) and later of BERT (Devlin et al., 2018), together with its smaller version ALBERT (Lan et al., 2019), revolutionized the state of the art in NLP, becoming the standard for tasks that involve computational language modeling. One of these tasks is extractive summarization, where the goal is to create a summary of a given text by selecting and extracting key phrases and sentences from the original document. One limitation of using BERT for this kind of task is the maximum input size that transformers can process, which makes working with long documents difficult. In this work we use BERT and similar language models to build a system that extracts the jurisprudence from a Supreme Court legal ruling. To this end, we propose an architecture that encapsulates information at two levels, text block and document, and then performs a binary classification of each block. To validate that the proposed model can solve the task, we ran experiments on the BillSum legal-document dataset (Kornilova & Eidelman, 2019), achieving results comparable to state-of-the-art models in terms of ROUGE.
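    The final selection step of such a block-level extractive summarizer can be sketched as follows (hypothetical names; the relevance scores here stand in for the output of the BERT-based binary classifier described above):

    ```python
    def extract_summary(blocks, scores, k=2):
        """Rank text blocks by a relevance score (in the described system,
        the classifier's probability that a block belongs to the summary)
        and return the top-k blocks in their original document order."""
        top = sorted(range(len(blocks)), key=lambda i: scores[i], reverse=True)[:k]
        return [blocks[i] for i in sorted(top)]
    ```

    Restoring document order after ranking matters for extractive summaries, since the selected blocks should read as a coherent excerpt of the original text.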
    Research Directions for Principles of Data Management (Abridged)
    (2016) Abiteboul, Serge; Arenas Saavedra, Marcelo Alejandro; Barceló Baeza, Pablo; Bienvenu, Meghyn; Calvanese, Diego; Claire, David; Hull, Richard; Hüllermeier, Eyke; Kimelfeld, Benny; Libkin, Leonid
    Solutions and query rewriting in data exchange
    (2013) Arenas Saavedra, Marcelo Alejandro; Barceló Baeza, Pablo; Fagin, Ronald; Libkin, Leonid
    Three iterations of (d − 1)-WL test distinguish non isometric clouds of d-dimensional points
    (2023) Delle Rose, Valentino; Kozachinskiy, Alexander; Rojas González, Luis Cristóbal; Petrache, Mircea Alexandru; Barceló Baeza, Pablo
    The Weisfeiler-Lehman (WL) test is a fundamental iterative algorithm for checking isomorphism of graphs. It has also been observed that it underlies the design of several graph neural network architectures, whose capabilities and performance can be understood in terms of the expressive power of this test. Motivated by recent developments in machine learning applications to datasets involving three-dimensional objects, we study when the WL test is complete for clouds of Euclidean points represented by complete distance graphs, i.e., when it can distinguish, up to isometry, any arbitrary such cloud. Our main result states that the (d−1)-dimensional WL test is complete for point clouds in d-dimensional Euclidean space, for any d ≥ 2, and that only three iterations of the test suffice. Our result is tight for d = 2, 3. We also observe that the d-dimensional WL test only requires one iteration to achieve completeness.
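    The basic refinement step underlying the test can be sketched in a few lines. The code below runs iterated 1-WL color refinement on the complete distance graph of a point cloud (a simplified sketch with hypothetical names; the paper's completeness result concerns the higher-order (d−1)-WL test, and plain 1-WL is not complete in general):

    ```python
    def wl_colors(points, iterations=3):
        """Iterated 1-WL color refinement on the complete distance graph of a
        point cloud: each point's color is refined by the multiset of
        (neighbor color, squared distance) pairs it sees. Returns the sorted
        multiset of final colors, comparable across clouds."""
        n = len(points)

        def dist2(p, q):
            return sum((a - b) ** 2 for a, b in zip(p, q))

        colors = [()] * n  # start from a uniform initial color
        for _ in range(iterations):
            colors = [(colors[i],
                       tuple(sorted((colors[j], dist2(points[i], points[j]))
                                    for j in range(n) if j != i)))
                      for i in range(n)]
        return sorted(colors)
    ```

    Two isometric clouds always receive the same color multiset, so a differing multiset certifies that the clouds are not isometric; completeness is the converse direction, which the paper establishes for the (d−1)-dimensional test in three iterations.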

Bibliotecas - Pontificia Universidad Católica de Chile - Central offices: Av. Vicuña Mackenna 4860, Santiago, Chile.
