Michael I. Jordan (Maryland, USA, 1956) is a mathematician and doctor of cognitive science. His work underlies AI applications such as ChatGPT and recommendation systems. Today he is a professor emeritus at the University of California, Berkeley, and a researcher at Inria in Paris, but he never followed his field into industry: “I am someone who wants to understand things, not just build them,” he says. His knowledge and that outside vantage point allow him to observe the generative AI hype with a healthy skepticism. He has just received the BBVA Foundation’s Frontiers of Knowledge Award for his career. He no longer takes many jokes about his name: Jordan was an MIT professor before the basketball star’s rise to fame, and he remains active today. “At first it was fun, but then I got tired of it,” he says.
Question. Is there too much hype around AI?
Answer. The hype will not stop. The people who develop this technology love to talk about it and extrapolate, although there is also a degree of hubris in that. What advances humanity is collective effort; we are not so intelligent individually. These companies develop powerful tools, but a tool lying on the floor is not much use: it has to be in the hands of a human or a group. That is when interesting things begin to happen. If you are trying to discover new drugs, the tool will help you look in the right places and offer advice. If you are facing climate challenges, it will help you make better predictions. If you turn to art, it will let you create new sounds or connections. By themselves, however, these tools do not solve the problem.
Q. Did you expect something like ChatGPT to appear two years ago?
A. It is better than I would have imagined. Although since 1990 it was known that brute force would arrive: simply letting an optimization algorithm find everything, doing it with huge amounts of data, without trying to understand anything directly. All those ideas already existed in 1990. Two additional ingredients have arrived since then: easy access to huge amounts of internet data, and then GPUs [graphics cards], which really gave it power. When you put all the ingredients together, suddenly it becomes really good. The right way to think about ChatGPT is as a collective presence of all humanity. It takes small pieces from hundreds of millions of people, especially those who wrote good things, such as on Wikipedia. Much of its intelligence comes from people’s contributions.
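The “predicting the following word” idea Jordan alludes to can be illustrated with a deliberately tiny sketch: a bigram counter over a toy corpus. This is an assumption-laden simplification for illustration only; systems like ChatGPT use neural networks trained on vastly more data, but the core objective, next-word prediction, is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word most often follows each
# word in a small corpus. A stand-in for the idea only, not for how
# large language models are actually implemented.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the word most frequently observed after `word`.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

Scaling this idea up, replacing word counts with a neural network, the corpus with a large slice of the internet, and counting with GPU-powered optimization, is the “brute force” recipe Jordan describes.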
Q. Now the Chinese model DeepSeek has just burst onto the scene. Are DeepSeek’s advances as extraordinary as they say?
A. I have not studied it in detail, but the advances seem significant. It is not entirely surprising, considering that the architectures that have worked, based on Transformers stacked in layers, were designed in a somewhat improvised way. Often it has been enough to use brute force to advance rapidly. But that does not mean there are no clever tricks or simpler architectures that also work.
Q. What could its success mean for Silicon Valley’s priorities, given the huge investments in software and data centers?
A. I think Silicon Valley should think harder about the business model for generative AI and large language models, and not depend only on brute force to advance.
Q. Anthropic founder Dario Amodei, creator of the chatbot Claude, said a few days ago in Davos that in two or three years AI will be “better than almost all humans at almost everything.”
A. I don’t believe it. He did not study computer science, linguistics, or the social sciences; his training was in physics. And physicists tend to have a lot of hubris about how the universe works. I think they are underestimating human genius, especially collective genius. Computers already do certain kinds of mathematics better than any of us. They can also write songs and even, probably, a novel. But I don’t think that in two or three years they will write novels like Dostoevsky’s. His novels speak of the human condition and resonate because they reflect the author’s experience, that of a person who had a life, who lived. Some of that can be imitated by predicting the following word across many old sentences, but it is not the same.
Q. At Google’s DeepMind, are they more focused on tools?
A. That seems fine to me. I make distinctions between companies. DeepMind strikes me as one of the most productive because they create useful tools. I do not follow all the details; maybe they have some arrogance of the “we are going to solve the world’s problems” type, but I don’t think they say so. They just try to create the best possible tools. But there are others, crazy people, who think that from their AI tower they will have all the knowledge of the world and know everything. If we want an answer to any question, we will go to it and it will give us the answer. That is very implausible. Humans have contextual, complex thoughts, and that great machine at the top does not know all those things. Saying that some entity will be smarter than us is simply not well defined.
Q. You have said that claiming an entity will be smarter than us is very naive.
A. Intelligence takes many forms. I like to talk about the intelligence of a market. It is composed of a multitude of small decisions. You do not need to know much to make each of those decisions, but when you join them all within an adequate structure, with incentives and certain types of connections, something incredible emerges. A market does amazing things: it stabilizes transactions, makes things available, adapts, and much more. By any definition, it is an intelligent entity. But it is not human intelligence; it is another type of intelligence. There are probably ten other forms of intelligence as well.
Q. When someone says we are going to “solve intelligence,” what are they talking about?
A. Solving all of them? Creating a mega-intelligence that covers them all? It sounds like science fiction and is not very useful for people with a practical approach, such as engineers or scientists like me. Nor do I think it helps society to believe we are about to achieve something like that. It helps society more to think that we are going to have very powerful tools and that we will find creative ways to use them. The 25-year-olds arriving in this field believe their role is to create autonomous robots that dance on stage. And no, their role is, for example, to develop a federated, connected system of cars so that nobody dies in one.
Q. And the robots Elon Musk presented, serving beer?
A. That is public relations. They are games, toys. Look, it is serious engineering, no doubt. But I don’t think it is a particularly good path. Again, they try to imitate humans, to copy them and, therefore, to replace them. I don’t think that should be technology’s main objective. It should be to help humans and allow us to do things we don’t do very well. I won’t criticize those robots; they entertain. But there are so many problems that would be a better focus for this technology than having robots bursting into flames or going to Mars.
Q. Why doesn’t generative AI have more collective and general objectives?
A. Generative AI is sexy. It does impressive things. You show what it does and everyone automatically assumes there is a super technology behind it. Result: let’s invest! And much of this is driven by the desire to raise 100 million in a financing round. But the reality is that if you walk into any company solving real-world problems, such as moving packages from one place to another, guaranteeing people’s safety, or improving education, everyone is sitting at the same table working together. It is an engineering approach to solving real problems. Of course they will use generative AI tools for certain things, but they will not spend all their time developing tools just to obtain a huge valuation. Although most companies do use some generative AI, their business model does not revolve around it. Instead, many of the startups that develop generative AI and achieve very high valuations will not survive, because they do not have a solid business model.
Q. Is that where we are today?
A. Yes. And I don’t mean that I know everything because I have a lot of experience, but Elon Musk has promised self-driving cars something like five times. Every year he said: “We are going to have them now.” And he has not succeeded, because he did not understand how difficult it is. Then you have Waymo, which I think is a more successful company in this regard. There are Waymo cars working in San Francisco, but in a simple way. They move relatively slowly, they are relatively safe, and that’s fine. It is taking time. These kinds of engineering projects are not solved in two years; they are more like ten-year projects. And we are talking about cars, which are not as complex as, for example, the human body and medicine, or climate physics. As soon as you get into any of those problems, you realize that sheer complexity starts to matter a lot.
Q. What would you say to those who fear losing their jobs to AI?
A. First, I would say that we need more labor economists in this discussion. When we talk about regulation, Europe has the habit of regulating first and thinking later. That is a very bad idea. First you have to understand the phenomenon. Then you add some regulation to make sure you create good balances. This top-down control of technology, just because people are afraid of it, is a very bad idea. That doesn’t mean you shouldn’t think about it. Clearly some jobs will disappear, and some may need to be protected. Perhaps the process will have to be slowed down. If jobs disappear in one or two years, that is too fast. But if it takes ten years, that is better: it gives people time to adjust and to understand that certain tasks, such as listening to a conversation and summarizing it, are ones a machine can do. If your dream was to be the person who takes notes in a meeting, you had better think of another career.
Q. And what would you say to a young person entering university in the era of AI?
A. It is not true that they should not study mathematics because computers will do everything. New problems will arise, and they will not be the old computer problems. There will be new problems in computer science, and it will not be about learning to program in Fortran or C. It will be about integrating things into larger systems. The machines will not do that alone. If you are the kind of person who builds things and understands how to integrate them, there will be plenty of jobs.