AI: A Threat or an Undeveloped Technology?

The technology of quick and effortless information gathering has changed how students do intellectual work, and unfortunately not for the better. For them, Wikipedia has become an open quarry for copying. The ability to paste whole passages into their work in just a few clicks is so tempting that this handy shortcut deters practically no student. Many schools protect themselves from this theft of ideas: they now accept homework only in digital format and check it with plagiarism-detection software. Other schools skip this effort and dispense with written work entirely; teachers test knowledge by having students talk through the topic of study. That reliably separates the wheat from the chaff.

The premise is that turning facts into knowledge requires mental effort, and each student has to make that effort independently; no didactically prepared program can do it for them. Of course, the computer helps you get information quickly. When a geography class discusses development problems in the Sahel region, the current data for the countries in question can easily be looked up on a smartphone. But the thought process that follows, the evaluation of those facts, should take place in educational discourse: why does one country (e.g., Senegal) cope with harsh climatic conditions while another (e.g., Niger) lags behind in development?

If this conversation does not take place, the value of such digital consumption of information is low. Reflecting on acquired knowledge is the core of good teaching. Students can easily find help with essays, and that’s fine – at least they develop their research skills. With literary texts, for example a novel, you engage deeply with the narrative structure of the work and its biographical and historical context. I know of no program that could simulate the hermeneutic process of interpreting a text during a content-based discussion in the classroom. Perhaps such a program will forever remain a utopia.

Recently, much of the world’s media reported on a new development from the company OpenAI – an algorithm that generates text from a given context, continuing the thought and imitating the tone and style of the supplied fragment. Despite the widespread practice among AI developers of publishing openly accessible code with new work, the company called OpenAI, with openness written into its very mission, decided this time to withhold its algorithm. The official reason: the model is potentially risky because it works so well and generates such convincing text that, in the “wrong” hands, it could be used to create fake news and impersonate authors.

OpenAI was founded by Elon Musk and Sam Altman in late 2015. Musk is known for his fears about the development of machine learning; in particular, he considers AI the biggest threat to the existence of humanity. Paradoxically, he decided to create a company that would gather the best researchers worldwide and become one of the most advanced laboratories developing AI technology. The logic: by making research as open as possible, no single actor can concentrate all the advanced technology in this area in its own hands.

This approach raises many questions. For example, the philosopher Nick Bostrom commented that “if you have a button that can do something awful to the world, you probably wouldn’t want everyone to have it.” And Musk himself recognizes that, by trying to develop artificial intelligence in the “right” way, you can inadvertently create the very technologies that are causing the fears.

Nevertheless, the non-profit organization OpenAI was created with the mission of developing AI technology as openly as possible. Incidentally, Elon Musk left the company’s board of directors a year ago while remaining a donor to the organization. The reason was a potential conflict of interest with Tesla’s development of AI technologies for driverless cars.

Recently OpenAI presented its latest breakthrough, an algorithm for text generation. Like other existing algorithms in this field, it is based on the principle of predicting each next word from the words that came before. Such algorithms are not programmed in the traditional way (“if the previous word is so-and-so, then the next word is so-and-so”); instead, the algorithm itself learns which word fits best given the preceding text. It does this through extensive “training” on a huge dataset composed of millions of texts written by people.
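
To make the principle concrete, here is a minimal sketch of next-word prediction in Python. It uses simple bigram counts over a toy corpus and is only an illustration of the idea; OpenAI’s model uses a large neural network trained on vastly more text, and all names and data below are invented for the example.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for "millions of texts written by people".
corpus = (
    "the model predicts the next word from the previous words "
    "and the model learns these predictions from text written by people"
).split()

# Count, for every word, which words follow it and how often.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start, length=10):
    """Generate up to `length` words, sampling each next word
    in proportion to how often it followed the previous one."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:  # dead end: word never appears mid-corpus
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

The generator’s only “knowledge” here is a frequency table built from one preceding word; neural language models replace the table with learned parameters that can take hundreds of preceding words into account, which is what makes their output so much more coherent.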

However, despite access to this amount of material, modern algorithms are still unable to generate sufficiently high-quality, coherent text. Individual sentences often look quite meaningful, but sustaining a consistent line of thought across several sentences is, as a rule, still an impossible task for artificial intelligence at its current stage of development. One might recall the seemingly meaningful dialogues of the widely publicized robot Sophia, but unfortunately all of her speeches are pre-programmed; humans scripted everything she voiced.

So does OpenAI offer anything revolutionarily new? The proposed model can generate entire paragraphs of consistent text at length. These results were achieved with a model that, unlike previous approaches, is larger, requires more computing resources, and was trained on a more comprehensive dataset drawn from some 45 million online resources. No fundamentally new approaches were presented.
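
The article does not name the model, but the description matches OpenAI’s GPT-2, whose smaller checkpoints were eventually made public. As a hedged sketch of what sampling from such a model looks like in practice, here is an example using the Hugging Face transformers library and the small public “gpt2” checkpoint; the library, checkpoint, and generation parameters are assumptions of this illustration, not details given in the article.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the small publicly released checkpoint (~124M parameters).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Recently OpenAI presented its latest breakthrough,"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation: each token is drawn from the model's
# predicted distribution over the next word, given all prior tokens.
output = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```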

Other researchers, however, consider the model a valuable contribution: demonstrating what results can be achieved by scaling up the model and the base of its “training” data is itself significant for the development of this technology. Still, the work is not extraordinary; it is regarded as a legitimate next step in developing AI text-generation technologies. Was the decision to withhold this model from the community justified?

Most scientists agree that fears of possible abuse of this technology are not unfounded. Already today, many actors hire thousands of people to write fake news and comments under social media posts, and the automatic generation of such content could significantly increase the amount of disinformation in the public space. And while it is sometimes relatively easy to spot a fake video or photo, recognizing fake texts, if they are well written, can require a lot of time and resources.

However, OpenAI’s real motives for withholding the new technology raise more doubts in the research community. Many believe that the company simply wanted to draw more attention to its research: the statement “AI poses a severe threat” commands media attention like nothing else.
