Talks and presentations

Open Source AI Chatbots

October 19, 2021

Talk, All Things Open, Virtual

There have been major advances in natural language processing in the past few years, particularly in developing and refining new network architectures, like transformers, that allow chatbots to handle natural language input more robustly. In this talk, we’ll cover the differences between machine learning and rule-based approaches to building chatbots and when to use each. We’ll also quickly walk through what you need to know to start building your own AI chatbots using Rasa’s open source framework, as well as practical recommendations for improving any AI chatbot after deployment using conversation-driven development.

Chatbots can be good: What we learn from unhappy users

October 11, 2021

Talk, Workshop on Insights from Negative Results in NLP and Oxford Women in Computer Science Distinguished Speaker Series, Virtual

It’s no secret that chatbots have a bad reputation: no one enjoys a cyclical, frustrating conversation when all you need is a quick answer to an urgent question. But chatbots can, in fact, be good, and having bad conversations can help us get there before chatbots are ever deployed. This talk will draw on both academic and industry knowledge to discuss questions like: What do users’ reactions to unsuccessful systems tell us about what successful systems should look like? Are we evaluating the right things… or just the easy-to-measure things? Do we really have to look at user data? If so, when and how often? When, if ever, should we retire old methods?

Testing, Validation and Evaluation: How Do You Know if Your NLP System Actually Works?

September 21, 2021

Talk, Toronto Machine Learning Summit, Virtual

If you’ve ever built, or thought about building, an NLP system, you’ve probably run into a few questions: How can you tell if it’s working? How will you know if it continues to work in the future? How do you know when, if ever, you should update your models? Luckily, there are tools to help you! This talk will cover the differences between testing, validation and evaluation, explain why you need all three, and walk through an example with a chatbot system.

5 mistakes you’ll probably make with language data (and how to recover)

September 09, 2021

Talk, New York R Conference, Virtual

Language is fundamentally different from other types of data, and it’s inevitable that you’ll run into some language-specific issues. This talk will cover some of the most common types of errors I’ve seen data analysts and machine learning engineers make with language data, from ignoring the differences between text genres to treating text as written speech to assuming that all languages work like English. We’ll also talk about ways to avoid these common mistakes (and recover gracefully if you’ve already made them).

AI = your data

February 08, 2021

Talk, Rasa Summit 2021, Online

New algorithms may get the press, but the real heart of any AI project is data collection and curation. This talk will show you why getting to know your data is so important and provide best practices for improving your data curation and annotation.

Data Science Portfolios (Updated)

January 18, 2021

Talk, R-Ladies Dallas, Virtual

This talk describes how to put together a data science portfolio that will help you stand out, different kinds of data science jobs and how to tailor your application to shine as a candidate.

What I Won’t Build

July 05, 2020

Keynote, WiNLP Workshop at ACL 2020, Virtual

This talk goes over my own personal development in terms of what ethical NLP looks like and where I currently stand, including a list of the specific types of applications I won’t build.

Intro to BERT-ology

May 14, 2020

Talk, An Evening with BERT - PyLadies Berlin, Online

This talk covers the current research (as of May 2020) into how BERT and related models capture information, including ablation studies and drawbacks/weaknesses.

Sociolinguistic Variation and Automatic Speech Recognition: Challenges and Approaches

February 14, 2020

Talk, American Association for the Advancement of Science, Seattle WA

Failing to account for sociolinguistic variation can result in accuracy differences between groups and generally worsens performance for members of minority groups. How to handle sociolinguistic variation in ASR systems, especially systems trained via deep learning, is an area of active research. This talk will introduce current approaches from natural language processing and discuss their benefits and drawbacks.

Intro to Computational Sociolinguistics

May 30, 2019

Talk, 2019 Symposium on Data Science and Statistics, Seattle, WA

All language data, whether text, speech or sign, reflects the social identity of the user and the environment they were in when they produced that language. This systematic social variation in language has been studied in linguistics for decades, but is increasingly important as we build and deploy tools that rely on automatic analysis. Failure to account for sociolinguistic variation can reduce overall system performance or, more worryingly, result in systems that are systematically biased against certain classes of users.

PUT DOWN THE DEEP LEARNING: When not to use neural networks (and what to do instead)

May 04, 2019

Talk, PyCon, Cleveland OH

The deep learning hype is real, and the Python ecosystem makes it easier than ever to apply neural networks to everything from speech recognition to generating memes. But when picking a model architecture to apply to your work, you should consider more than just state-of-the-art results from NeurIPS. The amount of time, money and data available to you is equally, if not more, important. This talk will cover some alternatives to deep learning, including regression, tree-based methods and distance-based methods. More importantly, it will include a frank discussion of the pros and cons of different methods and when it makes sense to use each in practice.

Setting Up Your Public Data for Success

March 25, 2019

Talk, AAAI 2019 Spring Symposium “Towards AI for Collaborative Open Science”, Stanford, CA

If you’re sharing your data, you probably want people to actually use it. This talk lays out some concrete strategies you can apply to help interested folks find and use your public data.

Data Structures in R

January 23, 2019

Talk, R Ladies Seattle, Seattle WA

This talk covers the basics of R’s data structures, as well as two data structures that aren’t included in Base R: linked lists and hashtables.

Paper Discussion: The Importance of Being Recurrent for Modeling Hierarchical Structure

November 27, 2018

Talk, Advanced Topics on Machine Learning discussion group, Seattle WA

You may, in fact, need more than attention. This paper compares recurrent and non-recurrent (i.e. transformer) neural network architectures, focusing on their ability to model hierarchical relationships in natural language. The authors found that for both subject-verb agreement and logical entailment, RNNs outperformed transformers. While there is limited theoretical support for these findings, the empirical results are compelling.

Mixed Effects Regression

September 26, 2018

Talk, Google Developer Group (GDG) Seattle, Seattle, WA

Mixed effects regression combines power, flexibility and clearly interpretable models, making it a very powerful technique. I’ll introduce you to the method (no stats background required!), show you how to apply it to your own datasets and walk you through some tricks for clearly visualizing the output.

Data Science Portfolios

September 19, 2018

Talk, R-Ladies, Washington, DC

This talk describes how to put together a data science portfolio that will help you stand out, different kinds of data science jobs and how to tailor your application to shine as a candidate.

I do, We do, You Do: Supporting active learning with notebooks

August 22, 2018

Workshop, JupyterCon, New York NY

The gradual release of responsibility instructional model (also known as the I do, We do, You do model) is a pedagogical technique developed by Pearson & Gallagher in which students engage with material more independently over time. In this workshop, participants will learn how to apply the I do, We do, You do framework to teaching with Jupyter notebooks. Over the course of the workshop, participants will complete a series of exercises designed to help them use Jupyter notebooks to more effectively support active learning in the classroom.

Reproducible Research Best Practices (highlighting Kaggle Kernels)

August 21, 2018

Workshop, JupyterCon, New York NY

In this workshop, we’ll take an existing research project and make it fully reproducible using Kaggle Kernels. This workshop will include hands-on instruction and best practices for each of the three components necessary for completely reproducible research.

Evaluating and Improving Reproducibility in Machine Learning

August 08, 2018

Talk, Puget Sound Programming Python (PuPPy), Seattle, WA

Reproducibility in machine learning means you can run the same code on the same data and get the same results. While this may seem relatively straightforward, there are plenty of potential pitfalls. In this talk, we’ll discuss a scale for evaluating the reproducibility of a machine learning project and how to make sure that your own work is easy to reproduce. While this talk is focused on researchers (it’s based on a paper I presented at an ICML workshop), the tips and tricks should apply to anyone who does exploratory data analysis or machine learning generally.

How to Give a Lightning Talk

March 19, 2018

Talk, R-Ladies Seattle, Chicago IL

Lightning talks are quick talks, usually under 5 minutes. The short format makes them great for first-time speakers! This is a very meta lightning talk on how to give a lightning talk, and it covers how to develop your talk, practice it and some of my best public-speaking tips.

How to find stories in data through visualization

March 09, 2018

Talk, The National Institute for Computer-Assisted Reporting, Chicago IL

Working with data is a kind of interview - it is a complex back-and-forth, drawing out the expressiveness of data. The process is often visual, depending heavily on a sequence of graphical displays, “visualizations.” This three-hour workshop will focus on the concepts and skills you need to use data visualization effectively as part of your reporting practice - to conduct a data interview. You will learn how to spot trends, highlight changes over time, identify outliers, make meaningful comparisons, and describe important patterns in your data - all through the effective use of visualization strategies. This class will be based in the R language and distributed through Jupyter notebooks. These pre-built examples can later be customized to suit your own projects when you return to your newsroom.

Socially-Stratified Validation for ML Fairness

February 13, 2018

Talk, Women in Data Science, Seattle, WA

In this talk, I cover some of the frameworks used to think about fairness in machine learning. Then I turn to more practical matters of determining which social factors are important in machine leaning, how to find appropriate validation data, and considerations when selecting metrics. Finally, I walk through a sample socially-stratified validation pipeline.

Character Encoding and You�

January 23, 2018

Talk, PyCascades, Vancouver BC

Why does your text output have all those black boxes in it? Why can’t it handle Portuguese? The answer is most likely “character encoding”. This talk will cover some of the common character encoding gotchas and cover some defensive programming practices to help your code handle multiple encodings.

Intro to Kaggle: XGBoost!

January 16, 2018

Talk, Metis, Seattle WA

This workshop was both an introduction to Kaggle and a beginner-friendly workshop on the XGBoost algorithm. You’ll need to provide some info to watch the video, but the same content is covered in the code.

Why does NLP need sociolinguistics?

September 25, 2017

Talk, Women Tech Makers Seattle, Seattle WA

This talk covers the basics of sociolinguistics and discusses why it’s important to consider linguistic variation when designing NLP applications.