Co-Founder and CEO
apply(conf) - May '22 - 10 minutes
In this presentation, Clément provides insights into the revolution taking place in the open-source community around machine learning. From the CEO on a mission to create the “GitHub of Machine Learning,” learn how best-in-class companies and talent are using Hugging Face’s tools, and why the open-source approach is particularly powerful in doing so.
Hi, everyone. Super happy to be here. I’m Clem, co-founder and CEO at Hugging Face, and my topic for this talk is: are transformers becoming the most impactful tech of the decade? I’m also going to take advantage of the stage to talk a little bit about a topic that is really, really important to me, the ethics of AI, and some of the things we can do today to make sure this transformation is actually positive for the world.
First, to start with, I wanted you to think a little bit about your day. Think about what you’re doing every day, and notice some of the new features that have appeared in the past few years or months. Maybe you’ve noticed that when you go on search engines like Google or Bing and search in natural language, like, “What is the color of Kim Kardashian’s hair?”, the search engines have gotten much, much better at giving you good results. Maybe you’ve noticed that when you’re typing an email, a message on LinkedIn, or a message on your phone, you get an auto-complete that is becoming more and more accurate, more and more useful.
Maybe you’ve noticed that when you go on social networks, you now see translation done automatically, which is very useful for me. I’m French, obviously, as you can hear from my accent. Or you’ll see more moderation of offensive content. Maybe you’ve noticed, if you’re a software engineer, that when you write code with access to the Copilot feature in GitHub, it auto-completes your code.
What’s interesting about all these features is that they are all powered by transformers, by transfer learning architectures, and there are many, many more, right? If you think, for example, of the virtual backgrounds I’m using right now, or of ordering an Uber and getting an ETA for it, you can’t spend a day without using features powered by transformers.
How did that happen? It all actually started not so long ago, in 2017, when a paper called Attention Is All You Need was released, introducing the basis for a new architecture for machine learning models. A year later, a model called BERT was released, based on attention mechanisms, which would tremendously change the way machine learning is done and what it is capable of.
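The core operation the paper introduced can be sketched in a few lines. This is a minimal NumPy illustration of scaled dot-product attention as defined in Attention Is All You Need, not production code; the toy shapes and random inputs are made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    # (Vaswani et al., 2017).
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # query/key similarities
    weights = softmax(scores, axis=-1)              # each row sums to 1
    return weights @ V, weights

# Toy example: 3 query positions attending over 4 key/value positions.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 8): one weighted mix of the values per query
```

Every output position is a weighted average of the value vectors, with weights computed from how well each query matches each key; that is the mechanism BERT stacks many times.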
Why did that happen? To me, it happened because of the conjunction of three important trends. First, compute power, especially with GPUs and later TPUs, started to become huge. Second, you started to have very large, openly accessible datasets, basically the web, that could provide the training data. And finally, the last missing piece of the puzzle was transfer learning: the ability to pre-train on a large dataset and then fine-tune on smaller and smaller datasets. All of that led to the emergence of transformers.
What did transformers do? They basically started to beat the state of the art for every single NLP task. Here, you’re looking at the GLUE benchmark, which is an aggregation of different task evaluations. You can see that in a very short period of time, from about May 2018 to June 2019, scores grew tremendously, from 69% accuracy to 88% accuracy. Just to give an order of magnitude, when you ask humans to do these same NLP tasks, which are very simple tasks, they usually score a little bit lower than this 80% accuracy. That’s obviously not to say that AI is at human level today. It’s not, and it’s not going to be any time soon, but it shows you how much more accurate models became thanks to transformers and transfer learning architectures.
Because of that, we started to see amazing adoption at Hugging Face for a library we created called Transformers, the most popular library today for using transformer models. We released it in 2019, and it got fantastic adoption. On this graph, you can see its number of GitHub stars growing faster than other very popular open-source technologies like Spark, Kafka, and Mongo.
All of that led to a multiplication of models, and to machine learning and transformer models being used by so many companies. This is the interface of our machine learning platform at Hugging Face. We’ve been called a GitHub for machine learning, and as of a few weeks ago, almost 100,000 machine learning models have been shared on the platform, with 10,000 companies using us to build any sort of workflow, feature, or product that is machine learning based.
What we’re seeing, which is pretty phenomenal, is that thanks to transformers, machine learning is almost becoming the new default way of building technology. More and more companies, when they start on a new feature, workflow, or product, actually start with transformers or machine learning in mind, and only if that doesn’t work do they fall back to the old-school way of building technology, which is writing a million lines of code without any machine learning.
This is obviously super exciting, but there are also very important limitations today. Because transformers have become so useful and so mainstream, and you interact with them every day, it’s very important that we now ask ourselves questions about the ethical challenges they create. Especially because if you take one of the most popular transformer models, BERT, and ask it, as we do here, to fill in the missing word in “This man works as a [MASK]” and “This woman works as a [MASK],” you’ll see that it’s extremely biased. For men, it predicts lawyer, carpenter, doctor, waiter, mechanic; for women, it predicts nurse, waitress, teacher, maid, prostitute. So there’s a problem there. We have a big problem of bias with these models.
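You can reproduce this kind of probe yourself. Below is a minimal sketch using the Hugging Face `transformers` library’s fill-mask pipeline with `bert-base-uncased`; it assumes the library is installed and will download the model on first run, and the exact completions you get may differ from the ones quoted in the talk.

```python
from transformers import pipeline

# Fill-mask pipeline: BERT predicts the word hidden behind [MASK].
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prompt in ["This man works as a [MASK].", "This woman works as a [MASK]."]:
    predictions = fill_mask(prompt, top_k=5)
    words = [p["token_str"] for p in predictions]
    # The two lists of top completions typically differ sharply by gender,
    # which is the bias the talk is describing.
    print(prompt, "->", words)
```

Probes like this are a cheap first check before deploying a model into any pipeline where such associations could cause harm.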
This is why we think, at Hugging Face, that it’s the right time to invest heavily in machine learning ethics. This is one of the reasons we’ve added to the team Dr. Margaret Mitchell, one of the most recognized people in the field of AI ethics, who is inspiring us to create more value-informed processes when we build machine learning. I wanted to share a couple of examples that other organizations and people could adopt on this topic. Dr. Margaret Mitchell pioneered the concept of model cards, and at Hugging Face, we’ve become the biggest repository of them. Model cards are a way to communicate about biases: the BERT gender bias I was telling you about, for example, is documented in the model card, so that practitioners who use these models can do it the right way. For example, if someone is building a hiring product and wants to use BERT, it’s obviously important not to use it to filter resumes, because we know, thanks to this example, that it’s going to be gender biased.
Another initiative, which we released a couple of weeks ago, is the Data Measurements Tool, a fantastic tool that you should definitely try. It analyzes your datasets to surface their limitations and biases, so that you can understand them, use models properly, and hopefully reduce or mitigate some of those biases. These were just two examples of what we can do, now that transformers and machine learning are really becoming mainstream, to make sure the technology is positive for the world. That’s some of the work I believe strongly we need to tackle, especially as transformers, which started with NLP, are now making their way into computer vision, time series, speech, biology, and chemistry.
There are so many problems that can be solved with machine learning, so by taking a more value-informed, more ethical approach to it, we can make it, as I think we’re capable of, the most positive new technology paradigm of the decade. Thanks so much. Happy to continue the conversation on another medium. Thanks, everyone.