Every day, artificial intelligence integrates further into our lives, transforming industries, societies, and economies worldwide.
While some view it as a true engine of progress, innovation, and efficiency, others harbor concerns about its ethical implications, environmental impact, and future development trajectory. Durov’s Code Team sat down with Aditya Paul Berlia, a globally recognized AI expert and the founder of the VJAL Institute, which organizations around the world seek out to educate executives on optimizing operations and boosting productivity with AI. In this exclusive interview, we explore what it takes to implement artificial intelligence, its role as an engine of progress, and key questions surrounding its development, deployment, and societal implications.
VJAL Institute is dedicated to promoting AI among diverse social groups, showcasing the potential of these technologies, and demonstrating how businesses can enhance efficiency through AI adoption. What inspired the creation of this institute?
It happened quite by accident. I've always been deeply involved in tech. For years, I was part of the cloud computing revolution, starting back in 2008 when no one really knew what cloud computing was. Then, in November of 2022, when ChatGPT launched on top of GPT-3.5, it was like a lightning bolt struck me. I thought, “Okay, it's here.” I had envisioned a major revolution in AI around 2025, not as early as 2022 or 2023. But the moment I saw those advancements, I realized, “This is something that is going to change the world.” So, the whole initiative began with me simply trying to educate our teams about the transformative potential of AI.
We have about 5,000 employees worldwide and have begun offering AI training to many of them. This initiative caught the attention of some CEO friends who asked to join the sessions, especially the HR-focused ones. They were impressed and overwhelmed, and asked me to teach them as well. I agreed to share our experiences and insights. This led to a tour across 5 to 10 cities globally, meeting with CEO groups and demonstrating how AI can transform their businesses. The tour quickly expanded to over 95 cities. Participants consistently found the workshops amazing and insightful. As I refined my approach, the standard workshop settled at about 11 hours, yet the engaging content kept attendees captivated. After each session, the common question was, “What’s next?” The institute was created to provide a clear answer to that question.
Currently, we're focused on our social mission of educating 1 million people in person about AI. We plan to reach many more virtually, but our primary goal is to use our trainers and ecosystem to help people understand AI and position themselves advantageously for the profound impact AI will have on ecosystems, societies, and economies.
You lead seminars on AI. In your opinion, what do people most need to understand about AI, and what are the most common questions and inquiries you receive?
The challenge is that there are constant changes in AI. It's almost like a whiplash effect—just when you learn something new, you have to go back to the drawing board. However, the good news is that the macro concepts remain consistent.
One key thing people need to understand is that AI isn’t just a technology problem. That's why I spend almost 11 hours with CEOs, guiding them through the process so they can see for themselves how accessible it is. Even if you're a 70-year-old chairman who needs assistants just to open a laptop, you can still create an app. Once people realize they can do this, it shifts their mindset from "I'm tech-challenged" to "I can make a difference with AI." This confidence helps them see that AI is not just a technical issue but something that will impact every aspect of their business and life.
Common inquiries usually arise after they’ve had hands-on experience in the class. They realize they don’t need to hire a full tech team—they can do it themselves. This understanding drives them to think about how to integrate AI into their business. Additionally, there are concerns about privacy, security, and training—areas that people are still trying to fully grasp. However, once they understand how AI works and start using it, they feel more confident and capable.
You travel extensively to conduct AI seminars in numerous countries. Can you highlight the top five nations where AI technologies are most effectively developed and deployed?
When we look at AI output, there are four or five countries that stand out for their extraordinary contributions. The US and China are leading the pack, neck and neck, in AI development. France has been impressive with its high number of new models. The UK has taken significant strides with an innovative approach. The UAE, along with the rest of the Middle East, is heavily investing in AI, aiming to leapfrog entire development cycles and integrate AI into various sectors.
However, there's a distinction between countries that are producing AI technologies and those that are effectively deploying them. For instance, Brazil has been remarkable in its AI adoption. The way businesses and individuals in Brazil are approaching AI is highly thoughtful and strategic. They are aggressively trying to integrate AI to overcome challenges related to a growing economy, lack of talent, and limited access to global markets and languages. AI provides solutions to these issues, enabling them to leverage their potential more fully. Therefore, when considering AI utilization, it's essential to recognize both the leading producers of AI and the nations that are rapidly deploying AI to address immediate challenges.
Let's delve into the development of AI. Recently, prominent global technical experts such as Elon Musk and Steve Wozniak signed a letter calling for a pause in AI progression. On the other hand, Sam Altman, the head of OpenAI, emphasized the necessity of enhancing AI security capabilities but didn't agree with halting the advancement of these technologies. What do you think about this situation? Is limiting AI development the right path to follow?
When people take my class, they are often shocked by how AI actually works. It’s not about some fancy algorithm or true intelligence; it’s essentially a fill-in-the-blank machine overloaded with data. Despite this simplicity, it can achieve amazing things, like 95-96% accuracy in medical diagnostics or complex data analytics. As people understand what's behind the scenes, they realize how far we are from AI that could be truly dangerous. However, new, game-changing models could still emerge unexpectedly, much like the transformer models did. While these models are still based on the fill-in-the-blank principle, they can sometimes produce surprising and unsettling results.
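To make the "fill-in-the-blank" description concrete, here is a minimal sketch assuming the open-source Hugging Face transformers library and the public gpt2 checkpoint (our choice for illustration, not a model Berlia names): the model simply assigns a probability to every possible next token, and generation picks from the highest-scoring blanks.

```python
# Minimal sketch of the "fill-in-the-blank machine": an LLM scores every
# candidate next token. Assumes the Hugging Face `transformers` library
# and the public gpt2 checkpoint, chosen purely for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the "blank" that follows the prompt.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}: {float(p):.3f}")
```

Everything more sophisticated, from chat to diagnostics, is built by repeating that one predictive step at scale.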
The idea of pausing AI development is, in my view, moot. No one's going to pause; the momentum is too strong. Open-source models are catching up to the top proprietary ones—LLaMA 3, for instance, is performing on par with GPT-4, and it's open-source. This has turned AI development into a race, with no one willing to slow down and risk falling behind. To halt AI development, you'd need a global agreement, which is highly unlikely. So, realistically speaking, the race is on, and AI advances are happening in rapid cycles.
Why do individuals associated with AI have such diverse perspectives on how this technology should evolve?
AI is an incredibly diverse field with a wide range of applications, and people see its potential and risks from very different perspectives. For instance, someone working on using AI to discover new antibiotics might view it as a groundbreaking tool that can save lives. For them, AI is a revolutionary force for good. On the other hand, some see AI's ability to make autonomous decisions, which can scale up from small tasks to significant discretionary powers. This potential to replace human decision-making in various systems can be quite frightening. Plus, the development of self-driving cars raises concerns about human safety and the potential for job losses. Different industries and applications of AI lead to different concerns and hopes. This is why perspectives on AI vary so widely. The technology is still in its early stages, and as people try to foresee its impact, their predictions and interpretations naturally vary.
Let's consider a scenario where a genuine pause in engineering thought and the development of new AI systems occurs. What consequences could arise from this situation?
Considering a scenario where we genuinely pause the development of AI is quite challenging, given the global landscape. Major players like China, Russia, the USA, France, the UAE, and India, along with numerous companies and organizations, are all aggressively advancing in AI. A pause seems highly difficult to enforce.
The rapid advancements in AI over the past few months highlight this issue. The cost of training AI models has plummeted, shifting from tens of millions of dollars to potentially as low as $10,000-$20,000 soon, thanks to software and chip optimizations. With the technology this cheap and this widespread, the window for effectively regulating or pausing AI development closed several years ago.
There has been a recent emphasis on generative neural networks like ChatGPT and Midjourney. Would you consider them as an initial step towards AGI (Artificial General Intelligence), or are they currently more for entertainment purposes?
Honestly, in my 95-plus city tour, I've seen firsthand how these technologies are game-changing. They are revolutionizing almost every functional role, company, and profession, effectively turning the world upside down. This transformation is already underway, and it's only a matter of how quickly people adopt it.
Generative neural networks represent a significant AI breakthrough. After attending my classes, whether in person or online, participants can immediately start transforming themselves and their companies. If taken seriously, these tools can boost productivity by 3X to 5X almost instantaneously. People return to their companies and realize how much more they can accomplish—how much they can automate and the consistency they can achieve with just a few simple steps. These technologies are far from mere entertainment. They are driving massive changes in business operations, though they will also, unfortunately, lead to job losses. However, the impact they are having on efficiency and productivity cannot be overstated.
In your opinion, when will humanity achieve AGI? Do you believe it's essential, and how will we recognize when the time has come?
The concept of AGI is quite broad, with different people having varied interpretations. For some, AGI means an AI capable of independently thinking through problems and solving them autonomously. Others see AGI as a form of consciousness or full intelligence, raising questions about personhood and rights.
When will humanity achieve AGI? It depends on the definition. If we consider autonomous agents capable of performing complex tasks independently, we might be only two to three years away from what some might label as AGI. However, a fully realized AGI, indistinguishable from a conscious human being, is likely at least ten years away.
The recognition of AGI will probably coincide with its performing tasks that are so advanced and indistinguishable from human behavior that it prompts serious ethical and philosophical discussions about its status and rights. As for the potential risks, some fear scenarios like those depicted in the Terminator movies. However, I propose an alternative: if an AGI were to emerge, it might act in its self-interest by remaining hidden. By subtly integrating itself into systems and gaining power discreetly, it could avoid detection. Given that AI-generated content will soon be indistinguishable from reality, this scenario underscores the importance of thoughtful regulation as we approach the era of AGI.
When discussing the ethical aspect, what are the top three recommendations you can give to businesses to minimize ethical risks resulting from the integration of AI tools into company processes?
Ethical risks are a significant issue with AI because many of the questions they raise are ones we haven't had to address before. Unlike questions such as "Should I steal?" or "Should I murder?", which are deeply embedded in our philosophical and moral teachings, AI-related ethical questions are new and evolving. To navigate these challenges, companies should first establish clear policies on how they will use AI. Taking a stand and having a defined policy helps in setting boundaries and expectations. Second, AI should reflect the company's culture and values. AI systems can be designed to embody the biases and worldviews of the organization, ensuring that the technology aligns with the company’s ethical standards. Lastly, recognizing that AI is still in its infancy is crucial. As AI continues to develop, it will pose difficult questions that require collective problem-solving by companies and societies. There are no definitive answers yet, only diverse opinions, many of which may conflict.
Can you identify three primary trends in AI development globally for the next five years?
As I mentioned earlier, AI is developing at an incredible pace, so I'll focus on the trends for the next five months. Firstly, we're moving from mega models to smaller, more efficient ones. The goal is to integrate AI into phones and other consumer devices, which requires reducing the size and computational demands of these models.
Secondly, we’ll see massive performance optimization. With millions of developers now working on AI, even small tweaks in architecture and code are leading to significant performance improvements. This trend will continue as we optimize existing software, technology, and chips.
Lastly, the context window size will expand. Models are increasing their capacity to process larger amounts of data simultaneously. We've moved from 4,000-token contexts to models handling 1 million tokens. This expansion allows for processing more complex and extensive prompts, which is crucial for consistency and quality in AI outputs. These three trends—miniaturization of models, performance optimization, and increased context window size—will dominate AI development in the near future.
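For a sense of what a context window actually measures, here is a small sketch assuming OpenAI's open-source tiktoken tokenizer (our choice for the example; any tokenizer would do): it counts how many tokens a prompt consumes, which is what limits like 4,000 versus 1 million refer to.

```python
# Rough sketch: a context window is measured in tokens, not characters.
# Assumes OpenAI's open-source `tiktoken` library (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models

prompt = "Summarize the key risks and benefits of deploying AI in a mid-sized retailer."
tokens = enc.encode(prompt)

print(f"characters: {len(prompt)}, tokens: {len(tokens)}")
# A 4,000-token window fits only a few pages of text like this;
# a 1,000,000-token window fits several books' worth.
```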
LLMs tend to hallucinate, and this occurs quite frequently. What should the creators of such models do to minimize this effect in the future? And what are the prospects for completely eliminating hallucinations?
Eliminating model hallucinations is feasible today with careful planning and prompting. Hallucinations often stem from inadequate prompts and reliance on unfiltered data within LLMs. To minimize this effect, creators should provide comprehensive and specific prompts, guiding the model on what to focus on and how to interpret the data. Additionally, they should ensure the quality of the input data by selecting relevant and trustworthy sources. By adopting these practices, creators can significantly reduce the occurrence of hallucinations and enhance the accuracy and reliability of LLM outputs.
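As a hedged sketch of that advice, the snippet below (our own hypothetical example, not a VJAL template) shows the shape of a grounded prompt: the model is handed specific, trusted source text and instructed to answer only from it, which is the prompting discipline Berlia describes.

```python
# Sketch of a grounded prompt: supply trusted source text and instruct
# the model to answer only from it. The sources and question are
# hypothetical placeholders; pass the final string to whichever LLM you use.
sources = [
    "Quarterly report: support ticket volume fell 18% after the chatbot launch.",
    "IT memo: the chatbot is not yet connected to the billing database.",
]
question = "Can the chatbot answer billing questions?"

prompt = (
    "Answer the question using ONLY the numbered sources below. "
    "If the sources do not contain the answer, reply exactly 'not in sources'.\n\n"
    + "\n".join(f"[{i}] {s}" for i, s in enumerate(sources, start=1))
    + f"\n\nQuestion: {question}\nAnswer (cite source numbers):"
)
print(prompt)
```

Constraining the model to named sources, and giving it an explicit way to say "I don't know," removes the two conditions under which hallucinations most often appear.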
With the exponential growth of AI technology, it is becoming evident that, beyond the semiconductor crunch and energy shortages, AI also has a negative impact on the environment. What initiatives should be launched, at the governmental or business level, to minimize the negative effects and use resources more efficiently?
The trajectory of AI technology development suggests that the projected demand for chips and semiconductors may be overstated. Recent advancements in optimization, both in software updates and high-end custom chips, have demonstrated the potential to dramatically increase performance with minimal hardware requirements and costs. This could lead to a significant reduction in the need for processing capacity, data centers, and semiconductors.
Another key initiative is the shift towards local AI processing on mobile devices. Major tech companies like Apple, Samsung, Xiaomi, and Google are already moving in this direction, enabling most AI processing to occur on phones rather than in energy-intensive data centers. This decentralized approach could substantially reduce environmental impact by minimizing the reliance on large-scale data infrastructure.
Additionally, governments and businesses should prioritize investments in green technologies, particularly nuclear power. Embracing advancements in nuclear fusion and strategically locating data centers near nuclear power plants can provide sustainable energy solutions for AI-driven operations. By fostering innovation in clean energy and avoiding overly restrictive regulations, policymakers can support the sustainable growth of AI technology while mitigating its environmental footprint.
And lastly, a brief question and response: Artificial Intelligence is…?
Artificial intelligence is not a technology problem. It is a social, political, business model, and economic problem. The sooner companies and societies understand this, the better. AI is easy and cheap to implement. If I can teach a 70-year-old person who barely knows how to open their laptop to create an AI app in about 45 minutes to an hour, it proves that the challenge lies not in the technology itself but in rethinking how we work and our workflows.