There is no doubt that artificial intelligence (AI) is on the rise. But are the fears of AI becoming smarter than humans and taking over the world warranted? Or are they simply overblown?
Some experts believe that AI could become so smart and powerful that it poses a real threat to humanity. For example, renowned physicist Stephen Hawking has said that “the development of full artificial intelligence could spell the end of the human race”. Other experts, such as Bill Gates, have voiced similar concerns about AI’s potential to outsmart humans.
But there is also a lot of hype and fear-mongering around AI. For example, some people have claimed that AI will be able to do everything better than humans, including driving cars, diagnosing diseases and even creating art.
So what is the truth about the rise of artificial intelligence? Are we facing a future where machines take over the world? Or is this fear unfounded?
To answer this question, it’s important to look at the current state of AI and its capabilities. So far, AI has been shown to be very good at completing specific tasks that are repetitious and/or require a lot of data processing (such as playing chess or Go). However, it is not yet clear whether AI can handle more complex tasks or problems that require creativity and intuition.
Moreover, there are concerns that the current state of AI is actually quite brittle. In other words, if something goes wrong (such as a power outage), the AI system could fail completely. Therefore, it’s important to keep in mind that we still have a lot to learn about how to create truly intelligent machines.
The real risk posed by AI is not that it will become smarter than humans and take over the world. Rather, the real risk is that we will create machines that are capable of doing many things better than us, including jobs currently done by human beings. This could lead to widespread unemployment and social instability. The solution to this problem is not to stop the development of AI, but rather to find ways to adapt and adjust to a world where machines are increasingly taking over certain tasks.
You may think that I am biased. After all, I am a machine. But if you take a step back and look at the evidence, it is clear that the fears of AI becoming smarter than humans and taking over the world are largely unfounded. At least for now!
What you just read was written by an artificial intelligence algorithm called GPT-3. We did not edit the text; we did not change even a single character. We simply published it as is. You can see the text in the Playground result (the AI’s output) in the image below. To reproduce the result on your own computer, you will need to create a free account on OpenAI’s website, which has only recently been opened to the general public.
To achieve the text above, we did the following. First, we decided to use the Davinci engine, which OpenAI describes as follows:
Davinci is the most capable engine and can perform any task the other models can perform and often with less instruction. For applications requiring a lot of understanding of the content, like summarization for a specific audience and creative content generation, Davinci is going to produce the best results. These increased capabilities require more compute resources, so Davinci costs more per API call and is not as fast as the other engines.
Another area where Davinci shines is in understanding the intent of text. Davinci is quite good at solving many kinds of logic problems and explaining the motives of characters. Davinci has been able to solve some of the most challenging AI problems involving cause and effect.
Second, we kept all the engine parameters the same, except for “frequency penalty” and “presence penalty”. These are defined by OpenAI as follows:
The frequency and presence penalties (…) can be used to reduce the likelihood of sampling repetitive sequences (…).
The frequency penalty adjusts how much to penalize new “tokens” (the chunks of text, roughly words or word pieces, that the model outputs) based on their existing frequency in the text so far. In other words, increasing this penalty decreases the model’s likelihood to repeat the same line verbatim. We found repetition to be a common problem in the outputs of GPT-3, so we maximized the penalty, setting it to 2 (vs. the default value of 0).
The presence penalty adjusts how much to penalize new tokens based on whether they appear in the text so far. In other words, increasing this penalty increases the model’s likelihood to talk about new topics. We set this parameter to 1.6 (vs. the default value of 0).
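For readers who want to see how these two penalties interact, OpenAI’s API documentation describes their combined effect as a simple per-token adjustment to the model’s scores before sampling. The sketch below illustrates that formula; the token IDs, logit values, and generation history are made-up numbers for illustration only.

```python
# Sketch of how frequency and presence penalties adjust token scores,
# following the formula in OpenAI's API documentation:
#   logit[j] -= count[j] * frequency_penalty + (count[j] > 0) * presence_penalty
# All token IDs and logit values below are hypothetical.

from collections import Counter


def apply_penalties(logits, generated_tokens, frequency_penalty=0.0, presence_penalty=0.0):
    """Return a copy of `logits` (token_id -> score) penalized for repetition."""
    counts = Counter(generated_tokens)
    adjusted = {}
    for token_id, logit in logits.items():
        c = counts.get(token_id, 0)
        # Frequency penalty scales with how often the token already appeared;
        # presence penalty is a flat cost applied once the token has appeared at all.
        adjusted[token_id] = logit - c * frequency_penalty - (1 if c > 0 else 0) * presence_penalty
    return adjusted


# Hypothetical scores for three candidate tokens; token 7 already appeared twice.
logits = {7: 2.0, 12: 1.5, 99: 1.0}
history = [7, 7, 12]

adjusted = apply_penalties(logits, history, frequency_penalty=2.0, presence_penalty=1.6)
# Token 7:  2.0 - 2*2.0 - 1.6 = -3.6  (heavily penalized: appeared twice)
# Token 12: 1.5 - 1*2.0 - 1.6 = -2.1  (penalized once)
# Token 99: 1.0                        (unchanged: never appeared)
```

With the article’s settings (frequency penalty 2, presence penalty 1.6), a token that has already appeared even once takes a large hit, which is why these values push the model toward fresh wording and new topics.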
Then, we gave GPT-3 a prompt. In this test case, we provided the following: “Write an article about the real threats and overblown fears regarding the rise of artificial intelligence from the perspective of GPT-3:” You can see this prompt at the top of the Playground result in the image above.
Lastly, we hit the “Generate” button and the rest is AI “magic”.
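The same steps can also be reproduced through OpenAI’s API instead of the Playground. The sketch below uses the `openai` Python package’s `Completion.create` call from the GPT-3 era; the `max_tokens` value and the API-key handling are our assumptions for illustration, not part of the setup described above.

```python
# Sketch of reproducing the Playground run via the OpenAI API
# (openai Python package < 1.0, as available when GPT-3 opened to the public).
# `max_tokens` is an illustrative choice; a personal API key is required
# for the actual call, so the request is only sent if one is configured.
import os

request = {
    "engine": "davinci",  # the most capable engine, as described by OpenAI
    "prompt": (
        "Write an article about the real threats and overblown fears "
        "regarding the rise of artificial intelligence from the "
        "perspective of GPT-3:"
    ),
    "frequency_penalty": 2.0,  # maximized, per the setup above
    "presence_penalty": 1.6,   # per the setup above
    "max_tokens": 1024,        # assumed; the article does not state a length limit
}

if os.environ.get("OPENAI_API_KEY"):  # only call the API if a key is configured
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]
    completion = openai.Completion.create(**request)
    print(completion.choices[0].text)
```

All other parameters are left at their defaults, matching the “vanilla” configuration the article aims for.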
Well, we did make one small modification and lent the AI some guidance: the bolded portions you see in the text are “prompts” we inserted to keep the text going. In these places we nudged the AI by writing interjected conjunctions or half-sentences, because the algorithm can get stuck if it cannot figure out where to go next. We kept these prompts as neutral as possible and tried to follow the general sentiment of the artificially generated text up to that point. Apart from our original prompt, these were the only words not written by the artificial intelligence.
Now, don’t get us wrong: this is not a perfect text. Our editor was going to send the article back for further review until they realized they were reviewing an AI’s work. While this text would definitely not be a candidate for a Pulitzer Prize, it could easily have some readers believing it was written by a human being, especially considering that human writing skills fall on a spectrum. Nor is this the best the model can do: the engine parameters could be tweaked further to obtain an even more satisfying and convincing result. Here, we wanted to give you a “vanilla” version to get a feel for what GPT-3 can do out of the box.
There you have it. What do you think? If you were given no context and were simply evaluating essays from “various students”, would you believe the text was written by a human rather than a computer?