Friday, November 22, 2024


An Interview With Economist Daron Acemoglu

Kamer Daron Acemoğlu

Exponential technologies will change our world for good. That, at least, is our opinion as tech optimists.

Now, it’s always good to have a sanity check, or at least to hear the contrarian view. We reached out to Daron Acemoglu, Institute Professor of Economics at the Massachusetts Institute of Technology (MIT), a best-selling author, a world-renowned speaker, and the recipient of several prestigious prizes, including the John Bates Clark Medal, awarded to him in 2005 by the American Economic Association and at the time given every two years to the best economist in the United States under the age of 40. His résumé is impressive, and Acemoglu is today one of the most cited economists on the planet.

Here is our exchange with the Maestro.

In your opinion, which exponential technologies will change our world by 2030: blockchain, AI, spatial computing, or quantum?

Difficult to say. All of these are highly hyped. Which technologies will flourish and find impactful applications remains to be seen. I don’t believe that blockchain has the potential to have pervasive effects except in a few narrow applications. AI is a different story. Even though much of what is written and said about AI is an exaggeration, there is no doubt that AI is advancing rapidly, and it is being rolled out widely. Just on that basis, I expect it will impact many sectors of the economy, including social media, communication and human resource management. Whether it can penetrate other activities, such as finance, law, education and healthcare, remains to be seen. More importantly, even if it does become widely used in sectors such as education and healthcare, there is a big difference between good use and bad use. For example, rolling out ChatGPT to millions of students and encouraging them to learn from the chatbot rather than from their teachers is easy. Making sure that they learn the right material and learn effectively is much harder.

AI is the land of the plausible, but not yet of the profitable. What is the business model that will make AI technology financially healthy for all?

I would say it differently. Right now, the only place where AI can be used profitably is in social media and search-type activities, and even those profits may come from manipulative uses (deep fakes, extensive information gathering, copyright violations, etc.). The key question is whether we can find a socially beneficial direction for AI, in which the technology is deployed to provide better information to humans so that they can make better decisions and become capable of performing more sophisticated tasks. This is unclear, and unfortunately it is not an area in which the industry has much interest.

I don’t see the tech and cultural transformation of society and the economy happening without a revision of our property rights; namely, I need to own my medical records and my digital agency, and be compensated, if AI or other tech stacks are going to use that (my) data to learn and to serve society at large with better products and services. Do you see this happening? Can Meta, Alphabet or Microsoft, for example, accept the fact that they will need to pay more stakeholders than they do today? Or is that an illusion?

Absolutely. Data is a crucial input for AI, and right now there is neither a market for data, nor property rights on data, nor incentives for people to create high-quality data. This is both unfair (people’s data is being stolen) and unproductive (AI models are being trained on low-quality data, contributing to the hallucinations and other problems of the available models). The solution must involve data markets and property rights on data. But it’s complicated, because individual property rights won’t work. Billions of people produce valuable data, but transacting with billions of people would be prohibitively costly. More importantly, lots of people produce highly substitutable data (everybody can recognize a cat, so platforms can play one set of users against another to drive down the price of data). So we need a new infrastructure for data markets with collective data ownership, in the same way that, for example, the Writers Guild of America plays the role of an intermediary protecting the rights of creative artists and inducing a type of collective ownership.

Do you think the financial and banking system will, at some point, accept crypto, or even just Bitcoin? What’s your take on the future of the financial system, which should be open to all, and where financial fees should become a commodity, namely equal to zero?

I hope not. Right now, cryptocurrencies are valuable for three simple reasons: they permit illicit activities; there is some premium because if one of them becomes widely accepted, the company in charge of it would become fabulously wealthy; and there is a bubble component to many cryptocurrencies, with gullible investors sometimes being encouraged to pay excessive valuations for some currencies. None of those sources of valuation are socially beneficial. My view is that cryptocurrencies could be useful in some very narrow circumstances, but they should be very heavily regulated.

Do you think sustainability has had bad marketing so far? Shouldn’t we instead be talking about a total tech and marketing transformation, one in which the world is run more efficiently? Should sustainability advocates look at tech as the only way to make this world less resource-intensive and less biased, and stop talking about ESG as a separate item on our to-do list?

ESG is a mishmash of many different things, very badly specified. Right now, it just empowers managers to do as they wish. Investors caring about the social good must be an important component of the market economy in the 21st century. For example, I believe that pressure from investors against fossil fuel companies could be quite effective. But this requires a very clear set of guidelines with measurable metrics. In the case of carbon emissions, we can do that. So my preference would be to abandon ESG and focus on carbon and perhaps some other important issues, such as labor practices (for example, whether companies are creating highly unsafe environments for workers or putting coercive pressure on their employees).

The EU AI Act seems to be the only regulation with some substance globally. Judging by the cases of the US and China, are we looking at a world where the rights of individuals will be sacrificed in the name of progress? Who sets the guidelines on what AI should do?

Yes, absolutely, that’s the real danger, and China is unlikely to play a leadership role here. The Chinese government is the biggest collector of data. The Chinese Communist Party is unlikely to be interested in protecting individual rights, and the evolution of the tech industry in China over the last two decades has created a culture in which intrusive data collection and surveillance have become normalized. US regulators are still highly beholden to the whims and wishes of the tech industry. So, for now, leadership in reining in abusive practices in AI, and in the tech industry more generally, must come from Europe. This is not sustainable if Europe itself does not become a big player in the AI field, and it doesn’t seem like this is going to be easy. The hope is that the US political pendulum will swing in favor of more robust regulation and less influence of tech companies on policy, and that, in the meantime, Europe will start becoming an innovator, not just a regulator, of AI.

Are you optimistic or pessimistic about the future of the world?  

Neither. It’s possible to use AI in a pro-worker way. It’s possible to use AI to support democracy and better communication. It’s possible to use AI in a manner consistent with individual rights, privacy and individual autonomy. But no, we are not heading in that direction. So if I believed that there would be no major redirection of technology and no major change in institutions in the United States and elsewhere, I would be very pessimistic. But I am clinging to the hope that it is possible for us to do a course correction, rein in the tech giants and start investing in AI that is good for people (and, in the process, also abandon crazy dreams about artificial general intelligence and disabuse ourselves of our continued trust in the tech giants).


Written by Francesco Pagano.

Francesco Pagano
Francesco Pagano is a Senior Partner at Jakala, a shareholder and contributor at Il Sole 24 Ore, and holds an MIA from Columbia University School of International and Public Affairs (SIPA), with more than 20 years of sales and marketing experience in the corporate and start-up worlds.


Francesco Pagano is an Executive Council member at CEOWORLD magazine. You can follow him on LinkedIn.