How can linguistic meaning be learned, represented, and used by machines? Advances in how artificial intelligence and natural language processing approach linguistic meaning have led to dramatic improvements in tasks such as machine translation, speech recognition, question answering, and spoken dialogue systems. In this presentation, I will discuss semantics, linguistic meaning, and the different ways meanings can be learned and represented in machines, including embedding approaches such as word2vec, ELMo, GloVe, and BERT; grounded approaches such as Words-as-Classifiers; and classical approaches such as First-Order Logic. We will see examples of how some of these approaches to semantics are learned using neural networks, along with their strengths and limitations.
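To give a flavor of the embedding approaches mentioned above, here is a minimal sketch of the core idea: words are mapped to vectors, and closeness of meaning is measured geometrically, typically with cosine similarity. The vectors below are made-up illustrative values, not output from word2vec or any trained model.

```python
import numpy as np

# Toy 4-dimensional "embeddings" (illustrative values, not from a trained model)
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.7, 0.2, 0.9]),
    "apple": np.array([0.1, 0.2, 0.9, 0.4]),
}

def cosine(u, v):
    """Cosine similarity: higher values mean the words are closer in embedding space."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_royal = cosine(vectors["king"], vectors["queen"])
sim_fruit = cosine(vectors["king"], vectors["apple"])
print(sim_royal > sim_fruit)  # semantically related words end up closer together
```

In trained models such as word2vec or GloVe, these vectors are learned from large text corpora rather than set by hand, so that words appearing in similar contexts receive similar vectors.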
Assistant Professor in the Department of Computer Science at Boise State University. PhD in Computational Linguistics from Bielefeld University, Germany. His research focuses on spoken dialogue systems and language acquisition.