The AI boom,[1][2] or AI spring,[3][4] is the ongoing period of rapid progress in the field of artificial intelligence. Prominent examples include protein folding prediction and generative AI, led by laboratories including Google DeepMind and OpenAI.
The AI boom is expected to have a profound cultural, philosophical,[5] religious,[6] economic,[7] and social impact,[8] as questions such as AI alignment,[9] qualia,[5] and the development of artificial general intelligence[9] have become prominent topics of popular discussion.[10]
History
In 2012, a University of Toronto research team used artificial neural networks and deep learning techniques to lower the error rate below 25% for the first time in the ImageNet challenge for object recognition in computer vision. The event catalyzed the AI boom later that decade, when many alumni of the ImageNet challenge became leaders in the tech industry.[11][12] The generative AI race began in earnest in 2016 or 2017, following the founding of OpenAI and earlier advances in graphics processing units, the amount and quality of training data, generative adversarial networks, diffusion models, and transformer architectures.[13][14] In 2018, the Artificial Intelligence Index, an initiative of Stanford University, reported a global explosion of commercial and research efforts in AI. Europe published the largest number of papers in the field that year, followed by China and North America.[15] Technologies such as AlphaFold led to more accurate predictions of protein folding and better drug development.[16] Economic researchers and lawmakers began to discuss the impact of AI more frequently.[17][18] By 2022, large language models saw increased usage in chatbot applications; text-to-image models could generate images that appeared to be human-made;[19] and speech synthesis software was able to replicate human speech efficiently.[20]
According to metrics from 2017 to 2021, the United States outranks the rest of the world in terms of venture capital funding, the number of startups, and patents granted in AI.[21][22] Scientists who have immigrated to the U.S. play an outsize role in the country's development of AI technology.[23][24] Many of them were educated in China, prompting debates about national security concerns amid worsening relations between the two countries.[25]
Experts have framed AI development as a competition for economic and geopolitical advantage between the United States and China.[26] In 2021, an analyst at the Council on Foreign Relations outlined ways that the U.S. could maintain its position amid progress made by China.[27][28] In 2023, an analyst at the Center for Strategic and International Studies advocated that the U.S. use its dominance in AI technology to drive its foreign policy instead of relying on trade agreements.[21]
Advances
Biomedical
There have been proposals to use AI to advance radical forms of human life extension.[29]
AlphaFold 2's score of more than 90 in CASP's global distance test (GDT) is considered a significant achievement in computational biology[30] and great progress towards a decades-old grand challenge of biology.[31] Nobel Prize winner and structural biologist Venki Ramakrishnan called the result "a stunning advance on the protein folding problem",[30] adding that "It has occurred decades before many people in the field would have predicted. It will be exciting to see the many ways in which it will fundamentally change biological research."[32] AlphaFold 2's success received widespread media attention.[33]
The ability to accurately predict protein structures from their constituent amino acid sequences is expected to have a wide variety of benefits in the life sciences, including accelerating drug discovery and enabling a better understanding of diseases.[31][34] Writing about the event, the MIT Technology Review noted that the AI had "solved a fifty-year old grand challenge of biology."[35] It went on to note that the AI algorithm could "predict the shape of proteins to within the width of an atom."[35]
Images and videos
In 2016, artificial intelligence was used to alter images and videos of real people, faking their actions or speech.[36]
Text-to-image models captured widespread public attention when OpenAI announced DALL-E, a transformer system, in January 2021.[37] A successor capable of generating complex and realistic images, DALL-E 2, was unveiled in April 2022.[38] An alternative text-to-image model, Midjourney, was released in July 2022.[39] Another alternative, the open-source model Stable Diffusion, was released in August 2022.[40]
Following other text-to-image models, language-model-powered text-to-video platforms such as DAMO,[41] Make-A-Video,[42] Imagen Video,[43] and Phenaki[44] can generate video from text or image prompts.[45]
Language
GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality text that can be difficult to distinguish from human writing.[46] An upgraded version, GPT-3.5, was used in ChatGPT, which later garnered attention for its detailed and articulate responses across many domains of knowledge.[47] A new version called GPT-4 was released on March 14, 2023, and was used in the Microsoft Bing search engine.[48][49] Other language models have since been released, such as PaLM and Gemini by Google and LLaMA by Meta Platforms.
In January 2023, DeepL Write, an AI-based tool for improving monolingual texts, was released.[50] In December 2023, Google unveiled Gemini, its latest model, claiming that it beat the previous state-of-the-art model, GPT-4, on most benchmarks.[51]
Music and voice
In 2016, Google DeepMind unveiled WaveNet, a deep learning network that produced English and Mandarin speech and piano music.[52] 15.ai, released in 2020, was one of the first publicly available speech synthesis tools that allowed users to generate natural, emotive, high-fidelity speech from text in the voices of fictional characters.[53][54] ElevenLabs allowed users to upload voice samples and create audio that sounds similar to the samples. The company was criticized after controversial statements were generated in the vocal styles of celebrities, public officials, and other famous individuals,[55] raising concerns that the technology could make deepfakes even more convincing.[56] An unofficial song created using the voices of musicians Drake and The Weeknd raised questions about the ethics and legality of such software.[57]
Impact
Cultural
Differing factions emerged during the AI boom, including effective accelerationists, effective altruists, and catastrophists.[58]
Dominance by tech giants
The commercial AI scene is dominated by American Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft, whose investments in this area have surpassed those from U.S.-based venture capitalists.[59][60][61] Some of these players already own the vast majority of existing cloud computing infrastructure, which could help entrench them further in the marketplace.[62]
Intellectual property
Tech companies have been sued by artists and software developers for using their work to train AI models.[63]
Concerns
Economic disruption
There are concerns that as AI becomes more sophisticated, it will perform better than human workers and be more cost-effective.[64][17]
Risks to humanity
Many experts have stated that the AI boom has started an arms race in which large companies compete to build the most powerful AI model on the market with little concern for safety.[65] Numerous safety concerns have been raised by experts during the boom,[66] in particular about powerful models being developed with speed and profit prioritized over safety and user protection.[65] There have already been significant numbers of reports of racist, sexist, homophobic, and other types of discrimination from ChatGPT, Microsoft's Tay, and leading AI facial recognition models.[67] It has been estimated that 80 to 120 researchers globally[67] are working to understand how to ensure AI is aligned with human values. Given the incomplete understanding of how AI works,[67] many researchers around the globe have voiced concerns about the potential future implications of the AI boom.[66] Public reaction to the AI boom has been mixed, with some parties hailing the new possibilities that AI creates,[68] its potential to benefit humanity, and its sophistication, while others denounce it for threatening job security and for giving 'uncanny' or flawed responses.[69][70][71][72]
In the midst of the AI boom, the hype surrounding artificial intelligence has itself been described as posing significant dangers. The enthusiasm and pressure generated by public fascination with AI can drive developers to rush the creation and deployment of AI systems, a rush that may lead to the omission of crucial safety procedures and potentially result in serious existential risks. As Holden Karnofsky has noted, competitive pressure to meet consumer expectations might tempt organizations to prioritize speed over thorough safety checks, jeopardizing the responsible development of AI.[73]
The prevailing AI race mindset heightens the risks associated with the development of artificial general intelligence.[74] While competition can foster innovation and progress, an intense race to outperform rivals may encourage the prioritization of short-term gains over long-term safety.[75] A "winner-takes-all" mentality can further incentivize cutting corners, potentially creating a race to the bottom and compromising ethical considerations in responsible AI development.[75]
Prominent voices in the AI community have advocated a cautious approach, urging AI companies to avoid unnecessary hype and acceleration.[73] Concerns arise from the belief that pouring money into the AI sector too rapidly could lead companies to become incautious as they race to develop transformative AI without due consideration of key risks.[75][73] Despite the prevailing hype and investment in AI, some argue that it is not too late to mitigate the risks associated with acceleration. Advocates of caution stress the importance of raising awareness of key risks, establishing strong security procedures, and investing in AI safety measures such as alignment research, standards, and monitoring.[73]
Regulations
In December 2023, the European Union finalized the world's first comprehensive set of rules in the form of the Artificial Intelligence Act to address the risks associated with AI, but experts fear that the legislation will lag behind technological progress being made in the field.[76]
See also
- AI winter, a period of reduced funding and interest in artificial intelligence research
- History of artificial intelligence
- History of artificial neural networks
- Hype cycle
- Technological singularity
References
1. Knight, Will. "Google's Gemini Is the Real Start of the Generative AI Boom". Wired. ISSN 1059-1028. Retrieved December 12, 2023.
2. Meredith, Sam (December 6, 2023). "A 'thirsty' generative AI boom poses a growing problem for Big Tech". CNBC. Retrieved December 12, 2023.
3. Bommasani, Rishi (March 17, 2023). "AI Spring? Four Takeaways from Major Releases in Foundation Models". Stanford Institute for Human-Centered Artificial Intelligence. Archived from the original on May 7, 2023. Retrieved May 16, 2023.
4. "The coming of AI Spring". www.mckinsey.com. Retrieved December 7, 2023.
5. Huckins, Grace (October 16, 2023). "Minds of machines: The great AI consciousness conundrum". MIT Technology Review. Retrieved December 12, 2023.
6. Robertson, Derek (July 18, 2023). "The religious mystery of AI". Politico. Retrieved December 12, 2023.
7. Lu, Yiwen (June 14, 2023). "Generative A.I. Can Add $4.4 Trillion in Value to Global Economy, Study Says". The New York Times. ISSN 0362-4331. Retrieved December 12, 2023.
8. Tomašev, Nenad; Cornebise, Julien; Hutter, Frank; Mohamed, Shakir; Picciariello, Angela; Connelly, Bec; Belgrave, Danielle C. M.; Ezer, Daphne; Haert, Fanny Cachat van der; Mugisha, Frank; Abila, Gerald; Arai, Hiromi; Almiraat, Hisham; Proskurnia, Julia; Snyder, Kyle (May 18, 2020). "AI for social good: unlocking the opportunity for positive impact". Nature Communications. 11 (1): 2468. doi:10.1038/s41467-020-15871-z. ISSN 2041-1723. PMC 7235077.
9. Tong, Anna; Dastin, Jeffrey; Hu, Krystal (November 23, 2023). "OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say". Reuters. Retrieved December 12, 2023.
10. Milmo, Dan (October 24, 2023). "Hope or horror? The great AI debate dividing its pioneers". The Guardian. ISSN 0261-3077. Retrieved December 12, 2023.
11. "The data that transformed AI research—and possibly the world". Quartz. July 26, 2017.
12. Lohr, Steve (November 30, 2017). "A.I. Will Transform the Economy. But How Much, and How Soon?". The New York Times.
13. "Everything You Need To Know About The Artificial Intelligence Boom". Nasdaq.com. Investing Daily. August 22, 2018.
14. "Why am I not terrified of AI?". Shtetl-Optimized. March 6, 2023. Archived from the original on May 12, 2023. Retrieved March 19, 2023.
15. Statt, Nick (December 12, 2018). "The AI boom is happening all over the world, and it's accelerating quickly". The Verge.
16. Wong, Matteo (December 11, 2023). "Science Is Becoming Less Human". The Atlantic. Retrieved December 12, 2023.
17. Lohr, Steve (November 30, 2017). "A.I. Will Transform the Economy. But How Much, and How Soon?". The New York Times.
18. "Nine charts that really bring home just how fast AI is growing". MIT Technology Review.
19. Vincent, James (May 24, 2022). "All these images were generated by Google's latest text-to-image AI". The Verge. Archived from the original on February 15, 2023. Retrieved March 15, 2023.
20. Cox, Joseph (January 31, 2023). "AI-Generated Voice Firm Clamps Down After 4chan Makes Celebrity Voices for Abuse". Vice. Archived from the original on May 7, 2023. Retrieved March 15, 2023.
21. Frank, Michael (September 22, 2023). "US Leadership in Artificial Intelligence Can Shape the 21st Century Global Order". The Diplomat. Retrieved December 8, 2023.
22. "Global AI Vibrancy Tool". Artificial Intelligence Index. Stanford University.
23. Gold, Ashley (June 27, 2023). "Exclusive: Immigrants play outsize role in the AI game". Axios. Retrieved December 12, 2023.
24. Ellis, Lindsay (October 23, 2023). "Dropping Out of College to Join the AI Gold Rush". The Wall Street Journal. Retrieved December 12, 2023.
25. Mozur, Paul; Metz, Cade (June 9, 2020). "A U.S. Secret Weapon in A.I.: Chinese Talent". The New York Times.
26. "What Is Artificial Intelligence (AI)?". Council on Foreign Relations.
27. "Lauren A. Kahn". Council on Foreign Relations. Archived from the original on January 3, 2022.
28. Kahn, Lauren (October 28, 2021). "U.S. Leadership in Artificial Intelligence Is Still Possible". Council on Foreign Relations.
29. Batin, Michael; Turchin, Alexey; Markov, Sergey; Zhila, Alisa; Denkenberger, David (December 1, 2017). "Artificial intelligence in life extension: From deep learning to superintelligence". Informatica.
30. Service, Robert F. (November 30, 2020). "'The game has changed.' AI triumphs at solving protein structures". Science.
31. Callaway, Ewen (November 30, 2020). "'It will change everything': DeepMind's AI makes gigantic leap in solving protein structures". Nature. 588 (7837): 203–204. Bibcode:2020Natur.588..203C. doi:10.1038/d41586-020-03348-4. PMID 33257889. S2CID 227243204.
32. "AlphaFold: a solution to a 50-year-old grand challenge in biology". DeepMind. Retrieved November 30, 2020.
33. Nerlich, Brigitte (December 4, 2020). "Protein folding and science communication: Between hype and humility". University of Nottingham blog.
34. Hubbard, Tim (November 30, 2020). "The secret of life, part 2: the solution of the protein folding problem". medium.com.
35. "DeepMind's protein-folding AI has solved a 50-year-old grand challenge of biology". MIT Technology Review. Retrieved November 30, 2020.
36. Vincent, James (December 20, 2016). "Artificial intelligence is going to make it easier than ever to fake images and video". The Verge.
37. Coldewey, Devin (January 5, 2021). "OpenAI's DALL-E creates plausible images of literally anything you ask it to". TechCrunch. Archived from the original on January 6, 2021. Retrieved March 15, 2023.
38. Coldewey, Devin (April 6, 2022). "New OpenAI tool draws anything, bigger and better than ever". TechCrunch. Archived from the original on May 6, 2023. Retrieved March 15, 2023.
39. "We're officially moving to open-beta". twitter.com. Archived from the original on December 28, 2023.
40. "Stable Diffusion Public Release". Stability AI. Archived from the original on August 30, 2022. Retrieved March 15, 2023.
41. "ModelScope 魔搭社区". modelscope.cn. Archived from the original on May 9, 2023. Retrieved March 20, 2023.
42. Kumar, Ashish (October 3, 2022). "Meta AI Introduces 'Make-A-Video': An Artificial Intelligence System That Generates Videos From Text". MarkTechPost. Archived from the original on December 1, 2022. Retrieved March 15, 2023.
43. Edwards, Benj (October 5, 2022). "Google's newest AI generator creates HD video from text prompts". Ars Technica. Archived from the original on February 7, 2023. Retrieved October 25, 2022.
44. "Phenaki". phenaki.video. Archived from the original on October 7, 2022. Retrieved October 3, 2022.
45. Edwards, Benj (September 9, 2022). "Runway teases AI-powered text-to-video editing using written prompts". Ars Technica. Archived from the original on January 27, 2023. Retrieved September 12, 2022.
46. Sagar, Ram (June 3, 2020). "OpenAI Releases GPT-3, The Largest Model So Far". Analytics India Magazine. Archived from the original on August 4, 2020. Retrieved March 15, 2023.
47. Lock, Samantha (December 5, 2022). "What is AI chatbot phenomenon ChatGPT and could it replace humans?". The Guardian. ISSN 0261-3077. Archived from the original on January 16, 2023. Retrieved March 15, 2023.
48. Lardinois, Frederic (March 14, 2023). "Microsoft's new Bing was using GPT-4 all along". TechCrunch. Archived from the original on March 15, 2023. Retrieved March 15, 2023.
49. Derico, Ben; Kleinman, Zoe (March 14, 2023). "OpenAI announces ChatGPT successor GPT-4". BBC News. Archived from the original on May 15, 2023. Retrieved March 15, 2023.
50. Ziegener, Daniel (January 17, 2023). "DeepL Write: Brauchen wir jetzt noch eine menschliche Lektorin?" [DeepL Write: Do we still need a human editor?]. Golem.de. Archived from the original on February 6, 2023. Retrieved March 15, 2023.
51. "Gemini". DeepMind. Retrieved December 8, 2023.
52. Robertson, Adi (September 9, 2016). "Google's DeepMind AI fakes some of the most realistic human voices yet". The Verge.
53. Zwiezen, Zack (January 18, 2021). "Website Lets You Make GLaDOS Say Whatever You Want". Kotaku. Archived from the original on January 17, 2021. Retrieved January 18, 2021.
54. Ruppert, Liana (January 18, 2021). "Make Portal's GLaDOS And Other Beloved Characters Say The Weirdest Things With This App". Game Informer. Archived from the original on January 18, 2021. Retrieved January 18, 2021.
55. Jimenez, Jorge (January 31, 2023). "AI company promises changes after 'voice cloning' tool used to make celebrities say awful things". PC Gamer. Archived from the original on April 4, 2023. Retrieved February 3, 2023.
56. Chopra, Anuj; Salem, Saladin (February 1, 2023). "Seeing is believing? Global scramble to tackle deepfakes". Yahoo News. Archived from the original on February 3, 2023. Retrieved March 15, 2023.
57. Coscarelli, Joe (April 19, 2023). "An A.I. Hit of Fake 'Drake' and 'The Weeknd' Rattles the Music World". The New York Times. ISSN 0362-4331. Archived from the original on May 15, 2023. Retrieved May 16, 2023.
58. Roose, Kevin (December 10, 2023). "This A.I. Subculture's Motto: Go, Go, Go". The New York Times. ISSN 0362-4331. Retrieved December 11, 2023.
59. Hammond, George (December 27, 2023). "Big Tech is spending more than VC firms on AI startups". Ars Technica. Archived from the original on January 10, 2024.
60. Wong, Matteo (October 24, 2023). "The Future of AI Is GOMA". The Atlantic. Archived from the original on January 5, 2024.
61. "Big tech and the pursuit of AI dominance". The Economist. March 26, 2023. Archived from the original on December 29, 2023.
62. Fung, Brian (December 19, 2023). "Where the battle to dominate AI may be won". CNN Business. Archived from the original on January 13, 2024.
63. Kafka, Peter (February 1, 2023). "The AI boom is here, and so are the lawsuits". Vox. Archived from the original on May 9, 2023. Retrieved March 15, 2023.
64. "The AI boom: lessons from history". The Economist.
65. Chow, A. R.; et al. (2023). "The AI Arms Race Is Changing Everything". TIME. 201 (7/8): 50–54.
66. Anderljung, M.; Barnhart, J.; Korinek, A.; Leung, J.; O'Keefe, C.; Whittlestone, J.; Avin, S.; Brundage, M.; Bullock, J.; Cass-Beggs, D.; Chang, B.; Collins, T.; Fist, T.; Hadfield, G.; Hayes, A.; Ho, L.; Hooker, S.; Horvitz, E.; Kolt, N.; … Wolf, K. (2023). Frontier AI Regulation: Managing Emerging Risks to Public Safety.
67. Scharre, Paul (April 16, 2019). "Killer Apps". Foreign Affairs. https://www.foreignaffairs.com/articles/2019-04-16/killer-apps. Accessed November 30, 2023.
68. Eapen, Tojin T.; Finkenstadt, Daniel J.; Folk, Josh; Venkataswamy, Lokesh (June 16, 2023). "How Generative AI Can Augment Human Creativity". Harvard Business Review. ISSN 0017-8012. Retrieved June 20, 2023.
69. McKendrick, Joe (May 17, 2020). "No matter how sophisticated, artificial intelligence systems still need human oversight". ZDNET. Archived from the original on May 10, 2023. Retrieved May 16, 2023.
70. Sukhadeve, Ashish (February 9, 2021). "Council Post: Artificial Intelligence For Good: How AI Is Helping Humanity". Forbes. Archived from the original on May 9, 2023. Retrieved May 16, 2023.
71. "Could AI advancements be a threat to your job security?". Learning People. Archived from the original on May 9, 2023. Retrieved May 16, 2023.
72. Zinkula, Jacob; Mok, Aaron (June 4, 2023). "ChatGPT may be coming for our jobs. Here are the 10 roles that AI is most likely to replace". Business Insider. Archived from the original on May 9, 2023. Retrieved May 16, 2023.
73. Karnofsky, Holden (February 20, 2023). "What AI companies can do today to help with the most important century". Cold Takes. Archived from the original on January 5, 2024. Retrieved December 8, 2023.
74. Aschenbrenner, Leopold (March 29, 2023). "Nobody's on the ball on AGI alignment". LessWrong. For Our Posterity. Archived from the original on September 26, 2023. Retrieved December 8, 2023.
75. "Avoiding Extreme Global Vulnerability as a Core AI Governance Problem". AI Safety Fundamentals. BlueDot Impact. November 8, 2022. Archived from the original on January 16, 2024. Retrieved December 8, 2023.
76. "Experts react: The EU made a deal on AI rules. But can regulators move at the speed of tech?". Atlantic Council. December 11, 2023.