Generative artificial intelligence is fueled primarily by the data we upload to the web. The processing of these large volumes of data is referred to as "Big Data," so we can say there is no artificial intelligence without Big Data. Processing data, meaning structuring and managing it, has given rise to new professions such as the data scientist. Many other professions also contribute to the field of artificial intelligence. Linguists, for example, help develop natural language models so that the algorithm "understands" human language better, not semantically, but by following the programming instructions set by the people who enter the data (the data entry process). In any process of feeding data into an artificial intelligence system, it is crucial to work with transdisciplinary teams that include social science professionals and people from different cultures and ethnicities. This ensures that the algorithm's training is multicultural and, fundamentally, helps avoid what are known as "training biases."
In the context of building the databases used to train artificial intelligence, there is a phrase that helps us understand how biases originate, one that data scientists often repeat: "Garbage in, garbage out." If garbage is fed into the database being built, garbage will come out of the model trained on it. This is why it is essential to ensure that the data entered is not partial, false, or biased.
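To make "garbage in, garbage out" concrete, here is a minimal sketch of screening records before they enter a training dataset. It is not any specific pipeline: the field names ("text", "label") and the rejection rules are hypothetical and chosen only for illustration.

```python
# A toy illustration of "garbage in, garbage out": discard incomplete or
# malformed records before they can contaminate a training dataset.
# Field names and rules below are hypothetical.

def is_usable(record: dict) -> bool:
    """Reject records that are partial or obviously malformed."""
    text = record.get("text", "").strip()
    label = record.get("label")
    if not text or label is None:      # missing text or label: partial data
        return False
    if len(text) < 10:                 # too short to carry useful signal
        return False
    return True

raw_records = [
    {"text": "A complete, well-formed training example.", "label": "ok"},
    {"text": "", "label": "ok"},       # empty text: discarded
    {"text": "short", "label": None},  # missing label: discarded
]

clean_records = [r for r in raw_records if is_usable(r)]
print(f"kept {len(clean_records)} of {len(raw_records)} records")
```

Real data curation also involves checking for bias and representativeness, which cannot be reduced to simple rules like these; the sketch only shows where such checks sit in the process.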
Moreover, the algorithm does not think; it follows the instructions given by the program it executes. Think of something more basic, such as a coffee vending machine or a washing machine, which runs through a series of instructions and automates the commands in its programming. Something similar happens with what is known as weak, "white box" artificial intelligence: we can follow all of its programming logic, so its behavior is explainable.
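The vending-machine analogy can be made literal with a few lines of code. This is a toy, rule-based "white box": every output can be traced back to an explicit, human-readable rule, which is what makes this kind of system explainable. The drinks, prices, and rules are invented for illustration.

```python
# A rule-based "white box" in the spirit of the vending-machine analogy:
# each decision follows directly from a readable rule, so it is fully traceable.
PRICES = {"espresso": 2, "latte": 3}  # price in coins (illustrative)

def vend(selection: str, coins_inserted: int) -> str:
    if selection not in PRICES:
        return "Unknown selection: refund coins"
    if coins_inserted < PRICES[selection]:
        return f"Insert {PRICES[selection] - coins_inserted} more coin(s)"
    return f"Dispense {selection}, return {coins_inserted - PRICES[selection]} coin(s)"

print(vend("latte", 2))  # -> "Insert 1 more coin(s)"
print(vend("latte", 4))  # -> "Dispense latte, return 1 coin(s)"
```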
However, in recent months we have witnessed the advance of what is called "generative" artificial intelligence, so named because the algorithm is more sophisticated and can "create" or "generate" content for humans, emulating human thought. To do so, it needs a triggering question entered by the person consulting it, known as a "prompt": the better the quality of the prompt, the better the response the AI model will provide. The models behind ChatGPT (GPT-4) and the image generation system DALL-E are clear examples of these techniques.
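The effect of prompt quality can be seen by sending a vague and a specific prompt to the same model. The sketch below assumes the OpenAI Python client (openai >= 1.0) and a valid API key in the environment; the model name is illustrative, not a recommendation.

```python
# A minimal sketch of how prompt specificity shapes the output, assuming the
# OpenAI Python client; requires OPENAI_API_KEY to be set in the environment.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

vague_prompt = "Tell me about biases."
specific_prompt = (
    "In three bullet points, explain how training biases arise when an AI "
    "model is trained on partial or unrepresentative data."
)

for prompt in (vague_prompt, specific_prompt):
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\n{response.choices[0].message.content}\n")
```

The second prompt constrains the format and the scope of the answer, which is what the article means by "the better the quality of the prompt, the better the response."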
Finally, search engines such as Google and Bing have integrated generative artificial intelligence tools into their systems. The latest projects to reach the market, besides OpenAI's GPT-4 version of ChatGPT, are Gemini (Google) and Copilot (Microsoft), and their application to various professions and to the content those fields require is causing a revolution. One of the main challenges will be adapting traditional education to the use of these tools. The emergence of this kind of natural language model implies a paradigm shift in education: classroom dynamics will need to be reviewed by teaching teams, who can turn these tools into allies that foster critical thinking rather than letting them replace the learning process. We will continue analyzing these scenarios and sharing updates on their use and influence across different fields.