The giant leap in natural language processing and massive computing
June 22, 2021
By Abhishek Kodi, Zakie Twainy
With the global data sphere estimated to reach 175 zettabytes1 by 2025, business users will need to rethink how they analyze businesses and sectors, as no amount of mental capacity or time will suffice for reading, comprehending and scrutinizing that much data.2
Unstructured data has become known as the blind spot of business insight. Indeed, recent estimates suggest that 80% of business data is unstructured, creating a critical need for tools that help business users better understand and extract insights from this type of data.3
Businesses that can leverage automation to comprehend unstructured data will have significant competitive advantages, and recent advances in the field of natural language processing (NLP) will have a massive impact on this challenge. For example, a portfolio manager may have thousands of pieces of news to read about their portfolio over the course of a month. A machine learning solution could help the portfolio manager identify which stories are most relevant to their needs and, based on nuanced phrasing, give the most important news articles the highest priority.4
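As a minimal sketch of how such relevance ranking might work, the example below scores articles by cosine similarity between word counts and a profile of terms the manager cares about. Production systems would use learned language models rather than word overlap; all names here (`rank_articles`, the sample headlines) are hypothetical.

```python
from collections import Counter
import math
import re

def tokenize(text):
    """Split text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def rank_articles(articles, interest_profile):
    """Rank articles by cosine similarity between their word counts
    and the word counts of an interest profile string."""
    profile = Counter(tokenize(interest_profile))
    profile_norm = math.sqrt(sum(c * c for c in profile.values()))

    def score(article):
        words = Counter(tokenize(article))
        dot = sum(count * profile[w] for w, count in words.items())
        norm = math.sqrt(sum(c * c for c in words.values())) * profile_norm
        return dot / norm if norm else 0.0

    return sorted(articles, key=score, reverse=True)
```

A manager interested in "earnings guidance semiconductor supply" would see a chip-sector headline ranked above unrelated news.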
Similarly, a financial institution that harnesses unstructured data could leverage machine learning to better understand customer behaviors. By analyzing millions of client emails or phone calls, companies may be able to detect specific client actions, such as if a client is likely to default or close their account, and implement proactive measures to retain the business.
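The attrition-detection idea above can be sketched, in heavily simplified form, as a scan over client messages for warning phrases. A real system would learn these signals from labeled data rather than hard-code them; the phrase list and function name here are hypothetical.

```python
# Hypothetical phrases that might signal attrition risk; in practice
# a trained classifier would learn such signals from historical data.
RISK_PHRASES = ["close my account", "switch providers", "cancel", "unhappy with"]

def flag_at_risk(messages_by_client):
    """Return the set of client ids whose messages contain any risk phrase.

    messages_by_client: dict mapping client_id -> list of message strings.
    """
    flagged = set()
    for client, messages in messages_by_client.items():
        text = " ".join(messages).lower()
        if any(phrase in text for phrase in RISK_PHRASES):
            flagged.add(client)
    return flagged
```

Clients flagged this way could then be routed to a retention team before they act.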
To attack the problems of unstructured data, artificial intelligence (AI) experts are trying to achieve artificial general intelligence (AGI), or a machine capable of understanding the world as well as any human, with the same capacity to learn how to execute a huge range of tasks. The real power is the combination of human-like reasoning matched with the computational power of a machine.5
Imagine if you could hire a financial analyst who could easily access, read, and extract insights from every financial statement that ever existed for a company in seconds. Today this is an unrealized dream, but it is inching closer to the present with some important accomplishments.
Among the key challenges for computers to achieve AGI is having the language capacity to understand the written word. Much as we teach a child a native language so we can later explain how to navigate the world and complete everyday tasks, so too must we teach computers these language skills so that later, we can teach them to read and understand financial statements. The field of NLP enables computers to understand, interpret, and manipulate natural human language, and in the last twelve months the field has seen an important evolution with the introduction of new and more powerful language models.
Over the last ten years, deep neural networks have become more powerful and commonplace, allowing for larger language models and faster processing of these models. While you can compare natural language models on many different attributes, one of the main attributes that correlates with a model's robustness is its parameter count.6 Parameters are the key to machine learning algorithms: they are the part of the model that is learned from historical training data. Generally speaking, in the language domain, the number of parameters is a good rough proxy for a model's sophistication.7
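To make "parameters" concrete: in a simple fully connected network, each layer contributes one learned weight per input–output connection plus one bias per output unit. The sketch below counts them; the layer sizes are illustrative, and real language models use more complex architectures (attention layers, embeddings) whose parameters are counted analogously.

```python
def dense_param_count(layer_sizes):
    """Count learned parameters in a fully connected network.

    For each pair of adjacent layers, parameters = weights
    (fan_in * fan_out) plus one bias per output unit.
    """
    return sum(fan_in * fan_out + fan_out
               for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]))
```

A toy network with layers of 4, 3 and 2 units has just 23 parameters; GPT-3, by comparison, has 175 billion.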
In the last five years we’ve seen steady improvements in language models, but in the last 12 months their sophistication has leapfrogged, with parameter counts increasing by over 814% since Q2 2020 and 99,900% compared with less than six months before that (Figure 1).
Note: The data for OpenAI’s GPT-1 is from Sanh, Victor, et al. “DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter.” ArXiv.org, 1 Mar. 2020, arxiv.org/abs/1910.01108. The data for Google’s BERT is from Anderson, Dawn. “A Deep Dive into BERT: How BERT Launched a Rocket into Natural Language Understanding.” Search Engine Land, 7 Nov. 2019, searchengineland.com/a-deep-dive-into-bert-how-bert-launched-a-rocket-into-natural-language-understanding-324522. The data for Salesforce’s CTRL is from Keskar, Nitish Shirish, et al. “CTRL: A Conditional Transformer Language Model for Controllable Generation.” ArXiv.org, 20 Sept. 2019, arxiv.org/abs/1909.05858. The data for OpenAI’s GPT-2 is from Radford, Alec. “Better Language Models and Their Implications.” OpenAI, 3 May 2021, openai.com/blog/better-language-models/#fn1. The data for OpenAI’s GPT-3 is from Brown, Tom B., et al. “Language Models Are Few-Shot Learners.” ArXiv.org, 22 July 2020, arxiv.org/abs/2005.14165. The data for Google’s T5 is from Wiggers, Kyle. “Google Trained a Trillion-Parameter AI Language Model.” VentureBeat, 14 Jan. 2021, venturebeat.com/2021/01/12/google-trained-a-trillion-parameter-ai-language-model/.
You’ll see that OpenAI’s GPT-3 and Google’s T5 have far surpassed previous models in terms of parameters, but it’s more than that. Most models are trained to complete specific tasks: AlphaGo, for example, outperformed the human world champion at the game of Go, yet cannot play tic-tac-toe or checkers, despite those games being much simpler. OpenAI’s GPT-3, by contrast, can perform many different tasks with no additional training or fine-tuning.8
Newer language models can support business applications, such as answering questions via digital assistants, summarizing data, translating text, and performing sentiment analysis, more accurately than ever before.
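To illustrate one of these applications, sentiment analysis, in its most rudimentary form: the sketch below scores text against a tiny hand-picked word list. Modern language models learn sentiment from context rather than matching word lists; the lexicon and function name here are purely illustrative.

```python
# Toy sentiment lexicon; real systems learn sentiment from data
# rather than relying on fixed word lists.
POSITIVE = {"gain", "growth", "beat", "strong", "record"}
NEGATIVE = {"loss", "decline", "miss", "weak", "default"}

def sentiment_score(text):
    """Return a score in [-1, 1]: (positive - negative) / matched words.

    Returns 0.0 when no lexicon words are present.
    """
    words = [w.strip(".,") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return (pos - neg) / total if total else 0.0
```

A headline like "Strong growth and record gains" scores positive, while "A weak quarter ending in loss" scores negative.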
While there is still more work to be done, these advancements offer a glimpse of what could be possible tomorrow. As quantum computing advances and models continue to grow more sophisticated, the AGI many dream of could become a reality.
However, there are many challenges on the path from today's NLP to AGI.
Time and resources: Running these models takes significant time and money. Lambda Labs estimated that a single training run of GPT-3 would take a minimum of 355 GPU-years and cost $4.6 million on the lowest-priced GPU cloud on the market.11
Centralized build, launch and execution: Since running these models is so resource intensive, it centralizes their creation. While there are hundreds of companies globally creating NLP technology,12 there are only a handful of organizations that have created the most powerful language models, such as Google, OpenAI, Facebook, Alibaba, Baidu, Microsoft and Salesforce.
Potential for nefarious use: Currently there is a scientific community that shares research and collaborates on a global scale. However, as NLP and other AI technologies become more powerful, governments may get involved to prevent foreign actors from using them in nefarious ways.
Embedded biases: Because only a handful of organizations have the resources to build them, the most powerful NLP models are concentrated in a few companies in select countries, which may leave out many other geographic points of view.
Notably, at the Big Compute 20 Tech Conference on February 11, 2020, when Sam Altman, CEO of OpenAI, was asked about the problem of diversity in the NLP industry, he said “[It’s] huge,” and continued, “the people who build these things, not through any intentional fault but just for the way it works, put a huge amount of themselves and their own views of the world into these systems, so we gotta have more diverse input.”13
The speed at which NLP language models are growing in sophistication can have huge impacts on society at large. For example, if we can teach computers how to code just by providing written instructions, what does this mean for the security of developers’ jobs, or technology companies everywhere? If we can harness the unstructured data that has been previously untapped, what does it mean about the decisions we will make as executives or investors? NLP language models are a key to the lock on AGI, and carefully monitoring these models and their effects is critical for having a more educated view on technology and the path of business moving forward.
1 One zettabyte is approximately equal to one thousand exabytes or one billion terabytes.
2 Coughlin, Tom. “175 Zettabytes By 2025.” Forbes, Forbes Magazine, 29 Nov. 2018, www.forbes.com/sites/tomcoughlin/2018/11/27/175-zettabytes-by-2025/?sh=5d98dd665459.
3 Rogers, Adam. “Council Post: The 80% Blind Spot: Are You Ignoring Unstructured Organizational Data?” Forbes, Forbes Magazine, 29 Jan. 2019, www.forbes.com/sites/forbestechcouncil/2019/01/29/the-80-blind-spot-are-you-ignoring-unstructured-organizational-data/.
4 Gutierrez, Daniel. “How Financial Institutions Can Deal with Unstructured Data Overload.” InsideBIGDATA, 16 Mar. 2021, insidebigdata.com/2021/03/15/how-financial-institutions-can-deal-with-unstructured-data-overload/.
5 Heath, Nick. “What Is Artificial General Intelligence?” ZDNet, ZDNet, 22 Aug. 2018, www.zdnet.com/article/what-is-artificial-general-intelligence/.
6 Yao, Mariya. “10 Leading Language Models For NLP In 2021.” TOPBOTS, 21 May 2021, www.topbots.com/leading-nlp-language-models-2020/.
7 Wiggers, Kyle. “Google Trained a Trillion-Parameter AI Language Model.” VentureBeat, VentureBeat, 14 Jan. 2021, venturebeat.com/2021/01/12/google-trained-a-trillion-parameter-ai-language-model/.
8 Lauret, Julien. “GPT-3: The First Artificial General Intelligence?” Medium, Towards Data Science, 25 July 2020, towardsdatascience.com/gpt-3-the-first-artificial-general-intelligence-b8d9b38557a1.
9 “Towards Interpreting Solidity Smart Contract: An Automatic and Practical Realization.” IEEE Transactions on Services Computing, vol. 9, no. 1, 2016, pp. 1–11., doi:10.1109/tsc.2015.2512393.
10 Heaven, Will Douglas. “OpenAI's New Language Generator GPT-3 Is Shockingly Good-and Completely Mindless.” MIT Technology Review, MIT Technology Review, 10 Dec. 2020, www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/.
11 Li, Chuan. “OpenAI's GPT-3 Language Model: A Technical Overview.” Lambda Blog, Lambda Blog, 11 Sept. 2020, lambdalabs.com/blog/demystifying-gpt-3/.
12 Crunchbase has a listing of 1,275 active companies who self-describe as being in the NLP sector as of 5/3/2021.
13 https://www.youtube.com/watch?v=0TRtSk-ufu0, 40 min. OpenAI has a mentorship program, OpenAI Scholars, for 6-10 scholars to address the issue
BNY Mellon is the corporate brand of The Bank of New York Mellon Corporation and may be used to reference the corporation as a whole and/or its various subsidiaries generally. This material does not constitute a recommendation by BNY Mellon of any kind. The information herein is not intended to provide tax, legal, investment, accounting, financial or other professional advice on any matter, and should not be used or relied upon as such. The views expressed within this material are those of the contributors and not necessarily those of BNY Mellon. BNY Mellon has not independently verified the information contained in this material and makes no representation as to the accuracy, completeness, timeliness, merchantability or fitness for a specific purpose of the information provided in this material. BNY Mellon assumes no direct or consequential liability for any errors in or reliance upon this material.
BNY Mellon will not be responsible for updating any information contained within this material and opinions and information contained herein are subject to change without notice.
This material may not be reproduced or disseminated in any form without the prior written permission of BNY Mellon. Trademarks, logos and other intellectual property marks belong to their respective owners.
© 2021 The Bank of New York Mellon Corporation. All rights reserved.