
To build state-of-the-art AI systems, you need a massive trove of data and a huge amount of computing power. Generative AI tools like ChatGPT are designed to leverage both, and they are already capable of changing how people live and work.

Market forecasts expect the increasing use of AI across end-use industries to grow the artificial intelligence market by almost 40% from 2023 to 2032.

In line with this, well-designed algorithms that ingest large amounts of data, recognize patterns and make predictions helped push the AI market to new heights in 2022, with ChatGPT among the breakthroughs.

North America dominated the global artificial intelligence market, with a revenue share of more than 50% in 2022. The 2022 Global AI Index supports this as the United States is ranked first in artificial intelligence (AI) based on talent, R&D and commercial applications. Indeed, the dynamic R&D processes in the region brought forth many advanced and innovative technologies.

In fact, the United States covers the biggest revenue share in the North American region, with the presence of major AI-driven companies like Google and Microsoft.

Nowadays, artificial intelligence is impossible to ignore. Even in its current form, with its limitations, advantages and risks, the technology is already transforming the way information is generated and presented, whether as text or images.

As we move deeper into the age of generative AI, we are learning how consumers and businesses can use artificial intelligence to perform the cognitive functions we usually associate with human minds.

The Rise of ChatGPT and Its Risks

Developed by the AI research laboratory OpenAI, Chat Generative Pre-Trained Transformer (ChatGPT) is an AI-powered platform designed for conversational AI systems like virtual assistants (VAs) and chatbots.

ChatGPT uses a very large and sophisticated GPT language model to generate human-like responses in text format. In other words, the responses ChatGPT generates are based on the data it was trained on.

Interestingly, you can ask ChatGPT to write and debug code, translate text, summarize a document, formulate a recipe and even create content in various tones, depending on how specific the prompt is.

As of the latest available data at the time of writing, ChatGPT has over 100 million users, and the website receives about 1 billion visits per month. That growth was achieved in just two months (from December 2022 to February 2023).

This form of generative AI is a testament to an incredible advancement, which is why it has captured the attention of researchers, businesses and the public. As these models become more powerful and sophisticated, it is crucial to understand their lifecycle and the challenges and opportunities they present.

In early February 2023, OpenAI launched a paid subscription called ChatGPT Plus, which later gained access to GPT-4, OpenAI's most advanced system, producing safer and more useful responses. GPT-4 is more sophisticated and capable than GPT-3 and reduces the number of hallucinations the chatbot produces.

The bigger development is how ChatGPT continues to evolve and be integrated into other applications and use cases. Microsoft reportedly made a multibillion-dollar investment in OpenAI, with a notable integration called Microsoft 365 Copilot, which brings natural language prompts directly into Office apps like Word, PowerPoint and Outlook.

However, generative AI, like any emerging technology, is not without risks. It is important to recognize that generative-AI models tend to produce inaccurate or biased results without any indication that their outputs may be problematic.

This is why so much discussion about regulating AI, particularly ChatGPT, has arisen. Before turning to generative AI as a business solution, the limitations of these models should be addressed.

Drawing on what worked in the industrial era to protect consumers, competition and national security, regulators need specialized expertise to understand not just how AI technology works, but also its social, economic and security effects. Determining accountability for those effects, while encouraging continued development, will shape how ChatGPT, a distinct form of innovation, plays out responsibly.

Yet stopping or slowing AI development could be detrimental given growing demand from digital natives. A new regulatory paradigm could instead combine risk identification and quantification, behavioral codes, and enforcement.

In reality, ChatGPT does not "think" like us; it uses data from the internet to generate a response, and as convincing as that response may sound, it can contain content that deceives a reader into believing something unproven or imprecise. That opens the door to weaponization of the technology.

Phishing is one example of a cybersecurity risk tied to ChatGPT: the tool can make crafting convincing phishing emails significantly easier, and could even enable simple exploit-code generation.

Furthermore, because ChatGPT is freely accessible, attackers could feed it a dataset of a company's existing emails to produce convincing phishing messages. ChatGPT can also help attackers construct a fake identity, making their attacks more likely to succeed.

In response, the Biden administration released a "Blueprint for an AI Bill of Rights," which pushes for oversight to ensure that OpenAI and other companies launching generative AI products regularly review their security features.

With this in mind, new AI models should meet a threshold of minimum security measures before being released broadly, as with Bing launching its own generative AI in early March and Meta finalizing a powerful tool of its own.

Additionally, ChatGPT's reported lack of encryption, strict access controls and access logs opens a large avenue for security threats.

Generative AI Evolution

By simple definition, generative AI is a subset of artificial intelligence that involves training machines to generate data such as images, music, text or even videos. From existing data sets, it can produce entirely new content and build upon it.

A generative AI system is developed with a technique called machine learning, which involves teaching an AI to perform tasks by exposing it to lots of data, training on it and eventually learning to reproduce its patterns. OpenAI's GPT-3, for example, was trained on around 45 terabytes of text (an enormous quantity of information from the internet, along with scripts of dialogue, to imitate human-like conversation) at an estimated cost of millions of dollars.

All of this is made possible by training neural networks on enormous volumes of data and applying attention mechanisms, a technique that helps a generative AI system to identify the context of a user’s prompt through word patterns and relationships.
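The attention mechanism mentioned above can be illustrated with a minimal scaled dot-product attention sketch in NumPy. This is a simplified, self-contained illustration of the core computation; the dimensions and random inputs are purely illustrative, not taken from any production model:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax: subtract the row max before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to every key; the weights capture word relationships."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # similarity of each query to each key
    weights = softmax(scores)          # each row sums to 1: an attention distribution
    return weights @ V                 # weighted sum of value vectors

# toy example: 3 tokens, each with a 4-dimensional embedding
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Each output row is a context-aware blend of all token representations, which is how the model relates a word in a prompt to the words around it.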

The lifecycle of generative AI models includes data collection, pre-processing, model training, fine-tuning, deployment and maintenance. What is more, the future of generative AI promises even more exciting developments and possibilities in consumer behavior, as well as new enterprise revenue and productivity enablers.

The business logic behind generative AI is that, for now, free versions give companies more data as an advantage; later, AI developers will sell and license their technology to monetize it.

Generative AI is going mainstream rapidly, and at the same time, governments and regulators who try to rein in this tech are still learning how it works and the limitations that should be imposed.

The stakes are high because, like other breakthrough technologies, generative AI could change how the world operates, both for the better and for the worse. Decisions being made now could have ripple effects. For example, some schools in the US have banned ChatGPT over plagiarism concerns, while others are still deciding whether to incorporate AI into their curriculums or treat it as a form of cheating.

While many people are already immersed in the generative AI evolution, shaping how children will relate to these technologies in their professional lives is a more complex process.

As generative AI matures, bigger investments in hardware, computing power, data storage and bandwidth are a must. Companies with large, unique, high-quality data sets and deep financial resources will be better positioned to optimize their models.

Developers will be critical to this next wave of innovation. AI coding tools will be a key area of it, and over time AI-powered coding will make developers more efficient.

Morgan Stanley Research analysts think that the current excitement could be more than just hype: generative AI is a serious contender, considering its real market impact potential.

As AI models emerge for specific industries such as retail, finance and healthcare, providers will be under pressure to expand both their technological and human resources.

AI Use Cases

As ChatGPT dazzles the general public with fanciful uses of artificial intelligence, such as writing Hollywood scripts or opining on fantasy baseball and art theory, Walmart Inc. is leaning on AI for a more pragmatic purpose: bargaining with suppliers.

The retail giant uses a chatbot developed by Mountain View, California-based Pactum AI Inc., whose software helps large companies automate vendor negotiations. Walmart tells the software its budgets and needs. Then the AI, rather than a buying team, communicates with human sellers to close each deal.

“We set the requirements and then, at the end, it tells us the outcome,” says Darren Carithers, Walmart’s senior vice president for international operations.

Carithers says Pactum’s software—which Walmart so far is using only for equipment such as shopping carts, rather than for goods sold in its stores—has cut the negotiating time for each supplier deal to days, down from weeks or months when handled solely by the chain’s flesh-and-blood staffers. The AI system has shown positive results, he says. Walmart said it’s successfully reached deals with about 68% of suppliers approached, with an average savings of 3% on contracts handled via computer since introducing the program in early 2021.

Walmart was Pactum’s first customer and one of the few major retailers in the US to adopt AI in its vendor negotiations at all. Like Walmart, Amazon.com Inc. has dedicated account managers for category-leading brands like Nestlé SA and Procter & Gamble Co., but it automates other types of vendor discussions, according to Martin Heubel, a former Amazon executive who now advises brands selling goods on the site. Rival Target Corp. says it doesn’t use AI for supplier negotiations.

“The huge potential is that any kind of company can soon use AI for a problem that normally requires an entire procurement team to handle,” says Tim Baarslag, a senior researcher at CWI, the National Research Institute for Mathematics and Computer Science in the Netherlands. Negotiating used to be a human-only skill, he says, but now AI is just as capable.

Pactum’s software is just one of several AI tools the world’s largest retailer has adopted in recent years as it seeks new ways to save its corporate team and customers time and money. Walmart announced a partnership with Microsoft Corp. in 2018 to work on artificial intelligence and other strategic tech and has been using AI developed by Microsoft-backed OpenAI to offer conversational text-to-shop tools, which the retailer touted in December. A consumer-facing chatbot, which can provide information such as the status of orders or returns, is now used by more than 50 million customers, Chief Executive Officer Doug McMillon said in a letter to shareholders in April.

Walmart—which has more than 100,000 total suppliers—started using Pactum with a pilot for its Canadian unit. The project then expanded to the US, Chile and South Africa. Pactum’s other clients include shipping company A.P. Moller-Maersk A/S and electrical-products vendor Wesco International Inc., among others, according to its website.

Artificial intelligence isn’t a threat to Walmart’s human negotiators, at least not yet. Instead the company is using the tool to squeeze savings from contracts that might not be big enough to justify taking up much—if any—of a procurement manager’s time. Pactum’s software can haggle over a wide range of sticking points, including discounts, payment terms and prices for individual products.

When a vendor says it wants to charge more for an item, Pactum’s system compares the request with historical trends, what competitors are estimated to pay and even fluctuations in key commodities that go into making the item, among other factors. It then tells Walmart the highest price it thinks its buyers should accept, a figure that a human procurement officer can modify if needed.
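Pactum's actual model is proprietary, but a decision of the shape described above can be sketched as a simple heuristic: combine historical prices, an estimated competitor benchmark and commodity cost movement into a single price ceiling. Every name, weight and number below is hypothetical, chosen only to illustrate the idea:

```python
def price_ceiling(requested, history, competitor_est, commodity_change):
    """Hypothetical heuristic for the highest acceptable unit price.

    requested        -- the vendor's asking price
    history          -- past prices paid for the item
    competitor_est   -- estimated price competitors pay
    commodity_change -- fractional change in key input-commodity costs
    """
    hist_avg = sum(history) / len(history)
    # adjust the historical average for input-cost inflation
    cost_adjusted = hist_avg * (1 + commodity_change)
    # never exceed the ask, the competitor benchmark, or cost-plus-5%
    ceiling = min(requested, competitor_est, cost_adjusted * 1.05)
    return round(ceiling, 2)

# vendor asks $12.00; history averages $10.00; competitors pay ~$11.20;
# input commodity costs are up 4%
print(price_ceiling(12.00, [9.8, 10.0, 10.2], 11.20, 0.04))  # 10.92
```

A human procurement officer can then override the suggested ceiling, as the article notes, before the chatbot negotiates toward it.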

Then the real negotiation starts. Pactum’s chatbot communicates with a flesh-and-blood vendor on the other side, displaying a series of arguments and proposals the supplier can accept or reject.

“There’s so much data, so much back and forth, and so many variables that can be tweaked,” says Pactum CEO Martin Rand. “The AI bot with a human on the other side will find a better combination than two people can over email or on the phone.”

Suppliers cede profit in at least some of the negotiations, but Pactum says they can get concessions such as better payment terms and longer contracts in return.

Three out of four suppliers that have tested the program told Walmart they preferred negotiating with the AI over a human, the retailer says, though a small percentage said they would’ve liked to negotiate with a person. “Some really like it and are like, ‘This is the best way to do it,’” says Carithers, the Walmart executive. “But I would relate that to people using self-checkout in stores. Some customers love it, but guess what: Some customers want to go to a manned checkout and see a person.”

Where AI Can Lead Us

Experts are foreseeing the dawn of Artificial General Intelligence (AGI), a form of AI capable of understanding or learning any intellectual task that a human being can do. It’s difficult to predict the exact timeline but the progress we’re witnessing in tools like GPT-4 suggests that the future is bright.

The potential of AGI looms on the horizon, and it could take the world of AI to greater heights. As we continue to embrace these technological breakthroughs, the possibilities for innovation become infinite.

One example of this, built on GPT-4 and causing a stir in the industry, is Auto-GPT, an open-source Python application that makes AI autonomous. Given a set of goals, it takes the necessary steps to accomplish them across the internet, such as connecting applications and software.
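Setting Auto-GPT's internals aside, the goal-driven loop it popularized can be sketched in a few lines. Here the model is replaced by a stub that proposes the next step from a fixed plan; this is a simplified, hypothetical sketch of the pattern, not Auto-GPT's actual code:

```python
def propose_next_step(goal, done):
    """Stand-in for an LLM call: suggest the next step toward the goal."""
    plan = ["search the web", "summarize findings", "draft report"]
    remaining = [s for s in plan if s not in done]
    return remaining[0] if remaining else None  # None means goal achieved

def run_agent(goal, max_steps=10):
    """Agent loop: request a step, execute it, feed the result back, repeat."""
    done = []
    for _ in range(max_steps):
        step = propose_next_step(goal, done)
        if step is None:
            break
        done.append(step)  # execution is simulated by recording the step
    return done

print(run_agent("write a market report"))
# ['search the web', 'summarize findings', 'draft report']
```

In the real application, the stub would be a call to GPT-4 and each step would invoke tools such as web search or file access, but the loop structure is the same.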

According to Harvard Business Review, we must reimagine the foundational base for AI, especially widely accessible tools like ChatGPT, to ensure that exchanges are safe and ethical. It is critical that we apply these principles to generative AI.

The White House’s AI Blueprint spelled out core principles to govern the effective development of AI systems. Among these are protection from unsafe or ineffective systems, protection from algorithmic discrimination and abusive data practices, and the right to understand how and why an automated system contributed to outcomes that affect users.

The National Telecommunications and Information Administration (NTIA), a Commerce Department agency that advises the White House on telecommunications and information policy, wants to know if there are measures that could be put in place to provide assurance "that AI systems are legal, effective, ethical, safe, and otherwise trustworthy."

“Responsible AI systems could bring enormous benefits…and for these systems to reach their full potential, companies and consumers need to be able to trust them,” said the NTIA Administrator.

While generative AI technology and its supporting ecosystems are still evolving, it is already quite clear that it will offer the most significant value-creation opportunities. Those who can harness the data in fine-tuning foundation models for their applications can expect to achieve the greatest differentiation and competitive advantage.