Do Artificial Intelligence Systems Understand?

Authors

C. Blanco Pérez, E. Garrido-Merchán

DOI:

https://doi.org/10.24310/crf.16.1.2024.16441

Keywords:

Understanding, Artificial Intelligence, Machine Learning, Intelligence

Abstract

Are intelligent machines really intelligent? Is the underlying concept of intelligence adequate to describe how these systems work? Is understanding a necessary and sufficient condition for intelligence? If a machine could understand, should we attribute subjectivity to it? This article addresses the problem of deciding whether so-called "intelligent machines" are capable of understanding, rather than merely processing signs, and examines the relation between syntax and semantics. The main claim concerns the unavoidability of semantics in any discussion of the possibility of building conscious machines, condensed into two theses: "If a machine is capable of understanding (in the strong sense), then it must be able to combine rules and intuitions"; "If semantics cannot be reduced to syntax, then a machine cannot understand". The conclusion is that it is not necessary to attribute understanding to a machine in order to explain the "intelligent" behavior it exhibits; a merely syntactic and mechanistic approach to intelligence, as a tool for solving tasks, suffices to account for the range of operations such systems display at the current stage of technological development.
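To make the abstract's distinction between processing signs and understanding them concrete, the following minimal Python sketch (a hypothetical illustration added here; it is not part of the article) produces superficially "intelligent" answers by purely syntactic rule lookup, with no access to what any of the signs mean:

```python
# Minimal sketch of purely syntactic sign processing (hypothetical example).
# The "rule book" pairs input strings with output strings; the program has
# no representation of what any of these signs mean.
RULES = {
    "what is the capital of france?": "Paris.",
    "do you understand me?": "Of course I do.",
}

def respond(query: str) -> str:
    """Answer by rule lookup alone: string matching, no semantics."""
    return RULES.get(query.strip().lower(), "I cannot parse that input.")

if __name__ == "__main__":
    # The exchange can look "intelligent" while remaining pure symbol
    # manipulation, in the spirit of Searle's Chinese Room (cited below).
    print(respond("What is the capital of France?"))
    print(respond("Do you understand me?"))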


References

Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., et al. (2016): “TensorFlow: a system for large-scale machine learning”, in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283.

Achler, T. (2012): “Artificial general intelligence begins with recognition: Evaluating the flexibility of recognition”, in: Wang, P., Goertzel, B. (eds.) Theoretical Foundations of Artificial General Intelligence, pp. 197–217. New York: Springer.

Alayrac, J. B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds, M., et al. (2022): “Flamingo: a visual language model for few-shot learning”, ArXiv preprint, arXiv:2204.14198

Baumberger, C., Beisbart, C., Brun, G. (2017): “What is understanding? An overview of recent debates in epistemology and philosophy of science”, in Explaining understanding: New perspectives from epistemology and philosophy of science, pp. 1–34. London: Routledge.

Bellmund, J. L., Gärdenfors, P., Moser, E. I., Doeller, C. F. (2018): “Navigating cognition: Spatial codes for human thinking”, Science 362 (6415).

Blanco Pérez, C. A. (2020): The Integration of Knowledge. New York: Peter Lang.

Chomsky, N. (1957): Syntactic structures. Berlin: De Gruyter.

Chomsky, N. (1980): “Rules and representations”, Behavioral and Brain Sciences 3 (1), pp. 1–15.

Clocksin, W.F., Mellish, C.S. (2003): Programming in PROLOG. New York: Springer.

Das, J. (2019): “Rules and tools of intelligence: How IQ became obsolete”, in Progress in Psychological Science Around the World. London: Routledge, pp. 71-90.

De Raedt, L., Dumančić, S., Manhaeve, R., Marra, G. (2020): “From statistical relational to neuro-symbolic artificial intelligence”, ArXiv

preprint, arXiv:2003.08316

Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., Song, D. (2018): “Robust physical-world attacks

on deep learning visual classification”, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1625–1634.

Floridi, L., Chiriatti, M. (2020): “GPT-3: Its nature, scope, limits, and consequences”, Minds and Machines 30 (4), pp. 681–694.

Fodor, J.A. (1995): The Elm and the Expert: Mentalese and Its Semantics. Cambridge: MIT Press.

Gärdenfors, P. (2014): The Geometry of Meaning: Semantics Based on Conceptual Spaces. Cambridge: MIT Press.

Ghotbi, N. (forthcoming): “The ethics of emotional artificial intelligence: A mixed method analysis”, Asian Bioethics Review, pp. 1–14, https://doi.org/10.1007/s41649-022-00237-y

Goodfellow, I. J., Shlens, J., Szegedy, C. (2014): “Explaining and harnessing adversarial examples”, ArXiv preprint, arXiv:1412.6572

Haugeland, J. (1989): Artificial intelligence: The very idea. Cambridge: MIT Press.

Hilliard III, A. G. (1979): “Standardization and cultural bias as impediments to the scientific study and validation of ‘intelligence’”, Journal of Research and Development in Education 12 (2), pp. 47–58.

Kaelbling, L. P., Littman, M.L., Moore, A.W. (1996): “Reinforcement learning: A survey”, Journal of Artificial Intelligence Research 4, pp. 237–285.

Kant, I. (1908): Critique of pure reason. Cambridge: Modern Classical Philosophers.

Landgrebe, J., Smith, B. (2022): Why Machines Will Never Rule the World: Artificial Intelligence Without Fear. London: Taylor & Francis.

LaValle, S. M. (2006): Planning algorithms. Cambridge: Cambridge University Press.

LeCun, Y., Bengio, Y., Hinton, G. (2015): “Deep learning”, Nature 521 (7553), pp. 436–444.

Legg, S., Hutter, M., et al. (2007): “A collection of definitions of intelligence”, Frontiers in Artificial Intelligence and Applications 157, p. 17.

Lin, T., Wang, Y., Liu, X., Qiu, X. (2021): “A survey of transformers”. ArXiv preprint, arXiv:2106.04554

Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A. (2017): “Towards deep learning models resistant to adversarial attacks”, ArXiv preprint, arXiv:1706.06083

Merchán, E.C.G., Molina, M. (2020): “A machine consciousness architecture based on deep learning and Gaussian processes”, in International Conference on.

Moyal, R., Fekete, T., Edelman, S. (2020): “Dynamical emergence theory (DET): A computational account of phenomenal consciousness”, Minds and Machines 30 (1), pp. 1–21, https://doi.org/10.1007/s11023-020-09516-9

Murphy, K. P. (2022): Probabilistic machine learning: An introduction. Cambridge: MIT Press.

Nagel, T. (1974): “What is it like to be a bat?”, The Philosophical Review 83 (4), pp. 435–450.

Pearl, J. (2009): Causality. Cambridge: Cambridge University Press.

Penrose, R. (1994): Shadows of the Mind. Oxford: Oxford University Press.

Pietka, D. (2020): “The issue of intellectual intuition in metaphysics”, Studia Philosophiae Christianae 56 (S1), pp. 165–185, https://doi.org/10.21697/spch.2020.56.s1.10

Poon, H., Domingos, P. (2011): “Sum-product networks: A new deep architecture”, in 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. 689–690.

Puente, C., Sobrino, A., Olivas, J. A., Garrido, E. (2017): “Summarizing information by means of causal sentences through causal graphs”, Journal of Applied Logic 24, pp. 3–14.

Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., Sutskever, I. (2021): “Zero-shot text-to-image generation”, in International Conference on Machine Learning, pp. 8821–8831.

Searle, J.R. (1982): “The Chinese room revisited”, Behavioral and Brain Sciences 5 (2), pp. 345–348.

Seifert, J., Friedrich, O., Schleidgen, S. (2022): “Imitating the human. New human-machine interactions in social robots”, NanoEthics 16 (2), pp. 181–192, https://doi.org/10.1007/s11569-022-00418-x

Sowa, J. F. (1992): “Semantic networks”, Encyclopedia of Artificial Intelligence 2, pp. 1493–1511.

Sullivan, E. (2022): “Understanding from machine learning models”, The British Journal for the Philosophy of Science, https://www.journals.uchicago.edu/doi/10.1093/bjps/axz035

Zadeh, L.A. (1988): “Fuzzy logic”, Computer 21 (4), pp. 83–93.

Zhai, J., Zhang, S., Chen, J., He, Q. (2018): “Autoencoder and its various variants”, in IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 415–419.


Published

2024-06-04

How to Cite

Blanco Pérez, C., & Garrido-Merchán, E. (2024). ¿Entienden los sistemas de inteligencia artificial?. Claridades. Revista De Filosofía, 16(1), 171–205. https://doi.org/10.24310/crf.16.1.2024.16441