Algorithmic threats to journalism: the logics of digital platforms and artificial intelligence as (non-) references for the production and circulation of news
Amenazas algorítmicas al periodismo: las lógicas de las plataformas digitales y la inteligencia artificial como (no)referencias para la producción y circulación de noticias.
Gabriel Landim de Souza
Federal University of Juiz de Fora
E-mail: gabriellandim@outlook.com
ORCID: https://orcid.org/0000-0001-8797-6599
Iluska Maria da Silva Coutinho
Federal University of Juiz de Fora
E-mail: iluska.coutinho@ufjf.br
ORCID: https://orcid.org/0000-0001-8797-6599
DOI: 10.26807/rp.v29i122.2211
Fecha de envío: 28/02/2025
Fecha de aceptación: 31/03/2024
Fecha de publicación: 05/04/2025
Abstract
In an increasingly platformized society (Helmond, 2015), citizens make decisions and form opinions through digital social networks. The affordances (Gibson, 1977) of platforms have impacted journalism's production routine. Media companies have started to submit to algorithmic logic (Barros et al., 2021; Gillespie, 2018) and to the possibilities allowed by artificial intelligence (Sichman, 2021) to share news. We sought to understand how journalism deals with algorithmic mediation without losing control over its content. We combined bibliographical research with a documentary survey of the editorial principles of Grupo Globo, one of Brazil's leading media companies, which has updated its policies on the use of artificial intelligence in journalism. Using Audiovisual Materiality Analysis (Coutinho, 2018), which aims to identify narrative conflicts and productions of meaning, we carried out empirical research on a report broadcast on TV Globo's Jornal Nacional that addresses the update to the editorial principles, in order to understand how the group publicizes the issue. We found that, faced with the risk of losing control over the circulation of its content and with the lack of transparency of digital mechanisms, the outlet highlighted the role of the journalist in finalizing texts mediated by algorithms and publicly reinforced this commitment. It is necessary to understand the new forms of threat to the press, veiled by computer codes.
Keywords: algorithms, journalism, platforms, artificial intelligence, threats.
Resumen
En una sociedad de plataformas (Helmond, 2015), los ciudadanos toman decisiones y forman opiniones a través de las redes sociales digitales. Las affordances de las plataformas (Gibson, 1977) han repercutido en la rutina de producción del periodismo. Los medios de comunicación han comenzado a someterse a la lógica algorítmica (Barros et al., 2021; Gillespie, 2018) y a las posibilidades que permite la inteligencia artificial (Sichman, 2021) para compartir noticias. Buscamos entender cómo el periodismo lidia con la mediación algorítmica sin perder el control sobre su contenido. Combinamos la investigación bibliográfica con un estudio documental de los principios editoriales del Grupo Globo, uno de los principales medios de comunicación de Brasil, que ha actualizado sus políticas sobre el uso de la inteligencia artificial en el periodismo. Utilizando el Análisis de Materialidad Audiovisual (Coutinho, 2018), que tiene como objetivo identificar conflictos narrativos y producciones de sentido, realizamos una investigación empírica con un reportaje emitido en el Jornal Nacional de TV Globo, que aborda la actualización de los principios editoriales, para entender cómo el grupo divulga el tema. Ante la falta de transparencia digital y el riesgo de perder el control de la circulación de sus contenidos, el medio enfatizó el papel del periodista en la finalización de los textos mediados por algoritmos y reforzó públicamente este compromiso. Es necesario comprender las amenazas que pesan sobre la prensa veladas por los códigos informáticos.
Palabras Clave: algoritmos, periodismo, plataformas, inteligencia artificial, amenazas.
1. Introduction
In an increasingly platformized society (Helmond, 2015), citizens make decisions and form opinions through digital platforms. Mediatization - the daily influence of the media on people's routines, once exercised above all by traditional outlets - now also takes place in digital media (Mintz, 2019). The affordances (Gibson, 1977) of platforms have changed media infrastructures and, with them, the production routines of journalism professionals. According to Barros et al. (2021), at the same time as news outlets found in platforms a socio-technical basis for sharing news, they began to submit to the logic of these structures. Given the record number of attacks suffered by Brazilian journalists in 2020 and 2021, analyzed by Landim (2023), it is necessary to understand the new forms of threats to the press, especially those veiled by computer codes, under a friendly and seemingly harmless interface.
According to Barros et al. (2021), large groups such as Google, Facebook, and X (formerly Twitter) are taking advantage of the media's need for visibility and forcing journalism itself, understood as a social institution but also as a business, to change its practices and values. The challenge for media companies is to give in to the logic of the platforms to expand reach and gain engagement, without jeopardizing their editorial principles. In the quest to grow their audience and gain visibility, news outlets run the risk of losing control over the circulation of their content.
For Gillespie (2018), as we adopt computer tools as means of communication, we begin to subject human knowledge to algorithmic logic. Sichman (2021) draws attention to the expansion of Artificial Intelligence, a technology that is increasingly present in journalistic practice, but which can be dangerous given the lack of algorithmic transparency.
Given this scenario, we sought to understand how journalism deals with the dilemma between controlling the circulation of its content and algorithmic mediation, which offers no transparency in the distribution of information. One of Brazil's main media groups - Globo - has updated its editorial principles with a new section on the use of artificial intelligence in journalism, highlighting human review and other precautions in the use of the technology. Launched in 2011 and updated in June 2024, the document brings together the set of principles and values that have always guided the journalistic coverage of the group's outlets and brands.
To understand the possible threats posed by digital technologies to journalism and the adaptations made to ensure information security, we combined bibliographical research with a documentary survey of the updates to the Globo Editorial Principles. We also carried out empirical research using Audiovisual Materiality Analysis (Coutinho, 2018) - which proposes the identification of narrative conflicts and productions of meaning - of a video report broadcast on TV Globo's Jornal Nacional and shared on digital social networks, which addresses the updating of the editorial principles, in order to understand how the group publicizes the issue to its audience.
2. The digital platform as an infrastructural and economic model
The concept of a platform comes from a union of business, computing, and culture (Poell et al., 2020). It is, therefore, a business model, capable of influencing behavior, aiming for profit: "through communication mediated by the platform space, they store and produce information that generates wealth for the corporation" (Barros et al., 2021, p. 7, our translation). Anne Helmond (2015) calls platformization the "rise of the platform as the dominant infrastructural and economic model of the social web" (p. 11, our translation), through which personal data is collected and algorithms capable of influencing behavior are created. In her studies, Kérley Winques (2022) has shown that, like social institutions - family, church, school - "algorithms are inserted as cultural and infrastructural mediators that need to be confronted socially, technically and expressively" (Winques, 2022, p. 3, our translation). The author's studies are based on the contributions of Jesús Martín-Barbero (2015), for whom cultural mediation is a central concept that refers to the role of cultural practices and the media in the construction and transformation of identities and social relations.
As Mintz (2019) argues, mediatization - the daily influence of the media on people's lives - takes place in the virtual bios (Sodré, 2014), an ecosystem created in the internet environment. This scenario draws attention to the role of algorithms in everyday decisions: “as we adopt computational tools as our primary means of expression and begin to make not only mathematics but all information, digital, we begin to subject human discourse and knowledge to those procedural logics that underpin all computing” (Gillespie, 2018, p. 97, our translation).
To Gillespie (2018), "algorithms also affect the way people search for information, how they perceive and think about knowledge horizons, and how they understand themselves in and through public discourse" (p. 110, our translation). Most of the public is led to consume content and find answers and entertainment instantly, without reflecting on how cell phones, computers, and televisions are fed and on what algorithmic criteria are used. When searching for news or online content, internet users are increasingly trapped in their bubbles, as the algorithms present each individual with the information that best fits their browsing and consumption habits. In practice, "the stories presented as the most important can be so different from user to user that there is not even a common object of dialog between them" (Gillespie, 2018, p. 114, our translation).
It is therefore crucial to pay attention to the institutional, political, and economic interests behind the operation of these zeros and ones. The insertion of algorithms into people's routines and into the construction of human knowledge may serve, on a large scale, political interests, especially those of big techs. Gillespie (2018) points out that algorithms highlight political values since they can define what is relevant, choose what should appear, and exclude what should not, as they try to predict and get to know users. For the author, the technical nature of the algorithm is taken as a guarantee of impartiality, simply because it seems accurate, when in fact it is full of controversies, especially due to the lack of transparency about how it works.
Meanwhile, the large companies and conglomerates that run the internet accumulate user data, sell it, and continue to operate firmly in the market, even being present in political discussions involving the consumption of content in the digital environment. Information providers have "a stronger voice both in the market and in the corridors of the legislature, and are increasingly getting involved in political debates on consumer protection and digital rights" (Gillespie, 2018, p. 103, our translation), when in fact they are trying to escape restrictions and responsibilities amid a growing context of disinformation.
3. The algorithm as a threat to journalism
Traditional media outlets have lost their monopoly on what is considered to be the truth as the possibilities for producing and sharing content on the internet have expanded. Not surprisingly, media outlets have started to promote various actions based on the potential of media convergence (Landim & Coutinho, 2022) to get closer to this audience, maintaining the credibility they have historically gained and achieving engagement, especially in the face of the countless attempts to discredit the press. In the same way, independent journalists and media outlets are trying to instill in this digital ecosystem the consumption of serious and reliable information, which is verified and checked. Journalism's objectivity - present in newsroom manuals, conventionalized in the work of the press, and taught in communication faculties - clashes, however, with the false algorithmic neutrality present on the internet. This promise of machine objectivity impacts not only journalism produced in a convergent way for digital media but also what is done on traditional channels - such as radio, TV, and print - since the content in circulation shapes people's daily perceptions. For Gillespie (2018), journalistic and algorithmic objectivity are not the same thing.
Journalistic objectivity depends on an institutional promise of due diligence, which is embodied and transmitted through a set of norms that journalists learn in training and on the job. The choices made by journalists represent a competence supported by a deeply ingrained, philosophical and professional commitment to put aside their own prejudices and political convictions. The promise of the algorithm, on the other hand, is based much less on institutional norms and acquired skills, and more on a technically influenced promise of mechanical neutrality. Whatever choices are made, they are presented as being both free from intervention by human hands and submerged in the cold workings of the machine. (Gillespie, 2018, p. 109, our translation)
If citizens are increasingly searching for news online, media outlets are also expanding their activities in this ecosystem. According to Barros et al. (2021), large groups such as Google, Facebook, and X are taking advantage of media outlets' need for visibility, as they find a socio-technical base on which they can operate, and are forcing journalism itself to change its practices and values. The challenge for media companies is to give in to the logic of the platforms in order to gain more visibility and interactivity with the public while adhering to their own values and editorial principles or those of journalism in general.
This means considering the multiple dimensions of this relationship, from the fact that news outlets find in the platforms the socio-technical basis they need to share news, reports, and even new genres and formats, to the need to adapt their dynamics to keep up with the mechanisms and incessant changes in the logic of the platforms, which impact all work activities in communication. (Barros et al., 2021, p. 4, our translation)
Therefore, as much as platforms challenge news organizations, their means of producing news, and their business models, they are fundamental to sharing informative content made by professional journalists. However, even when journalistic objectivity is made explicit, by submitting to the logic of the platforms these organizations lose control over their content in favor of an obscure, ill-defined promise of computational neutrality. While journalistic groups have publicized editorial principles, algorithms shape the consumption of information without transparency about how this circulation works in the digital environment.
It should also be noted that media organizations have no control over the context in which users access the news on the platforms, nor do they have access to the criteria that guide the updates of the platforms' algorithms, which drive the traffic of news and other information circulating to users. (Barros et al., 2021, p. 10, our translation)
If the algorithms are defined by the big techs, we are faced with a further aggravating factor: private interests placed above the public interest, the very interest that orients journalism and sustains its social function. For Van Dijck et al. (2018), democratic journalism depends not only on the autonomy of news organizations but also on defending the public interest in algorithmic mediations and holding big techs accountable for the circulation of information, especially that which threatens the credibility of journalism.
Therefore, all the actors involved in this process must take responsibility for the circulation of information, especially with more transparency on the part of digital platforms: "among the accountability actions demanded of them is the opening up of algorithms and the defense of public social values rather than private values" (Barros et al., 2021, p. 11, our translation). Despite the influence of the economic interests of media outlets on the production routine of journalism, it is fair to consider that these communication groups have editorial rules and journalistic values at their core.
In this sense, it is possible to argue that it is unfair that digital platforms like YouTube do not bear the same responsibilities - ethical, legal, and social - for the content that circulates in these digital spaces. Given the failed attempts in the Brazilian National Congress to regulate digital platforms and increase the accountability of big techs, the challenge of combating disinformation keeps growing. Where there are no rules or verification, falsehood finds room.
Given the record number of attacks suffered by Brazilian journalists in 2020 and 2021, analyzed by Landim (2023), it is necessary to understand the new forms of threats to the press, veiled by computer codes, under a friendly and seemingly harmless interface. While internet users are kept in their bubbles, which serve as shelters for their convictions and tastes, they end up not receiving or having access to news content that could inform their opinion on society's broader problems, which is the basic role of journalism. On the contrary, disinformation - which has found a place in the digital environment, without regulation and where anyone can produce content and even claim to be doing journalism - jeopardizes the credibility of reliable information produced by professional journalists.
In the ecosystem of networks, structured to maximize so-called confirmation bias and the rewards and euphoria of receiving news that agrees with their beliefs, internet users tend to believe and trust whatever confirms what they already think. This potential threat could be amplified by the expansion of artificial intelligence (AI). Sichman (2021) points out that the technology itself could make decisions based on user data laden with prejudice and hate speech, putting journalism at risk.
The incorrect use of the technology by journalists in their daily work could lead to mistakes, such as the dissemination of false, decontextualized, or prejudiced information, which goes against the basic principles of the press. Sichman (2021) advocates the implementation of Responsible AI in companies so that it is possible to foresee consequences for individuals and guarantee the appropriate behavior of the actors involved in the use of the technology. In the midst of the fearsome and biased black box of algorithms, the author emphasizes that regulators and users demand explanation and clarity about the data used: "methods are needed to inspect algorithms and their results and to manage data, its provenance and its dynamics" (Sichman, 2021, p. 43, our translation).
Just as journalism has taken advantage - and is increasingly taking advantage - of the potential of digital platforms, it has also made more frequent use of artificial intelligence tools to speed up and improve the daily work of reporting. The newspaper O Globo, for example, has developed its own virtual assistant, Irineu, which readers can ask to summarize the main points of a news story.
In newsrooms, AI tools have been used, for example, to transcribe interviews for the construction of texts. These possibilities for using the technology, as well as the precautions involved in the process, were publicized by Grupo Globo in the 2024 update of its Editorial Principles, a process of communication with the public amplified by a report broadcast on TV Globo's Jornal Nacional, which is the subject of this text's analysis.
4. Updating Globo's Editorial Principles with the expansion of Artificial Intelligence
Media outlets face a dilemma: expanding their production by drawing on the potential of digital platforms, such as mass sharing, audience growth, and interactivity, while taking a series of precautions so as not to be led by the interests of big techs, embedded in algorithms, or shaped by the misinformation and the prejudiced and violent content circulating on the web. Not surprisingly, the precautions required for this use of technology are included in the update to Grupo Globo's Editorial Principles (2024), published in June 2024.
In order to understand the possible threats posed by digital technologies to journalism and the adaptations made to guarantee information security, we combined bibliographical research with a documentary survey of Grupo Globo's Editorial Principles (2024). Launched in 2011, the document groups together and publicizes the set of principles and values that have always guided the journalistic coverage of the group's outlets and brands. According to TV Globo, the document has since been updated twice: in 2018, with guidelines on the use of social networks by the group's journalists, and in 2023, with a change to the coverage of massacres and violent mass attacks.
The new section of the editorial principles, specifically on Artificial Intelligence, encourages journalists to explore new tools capable of processing and generating texts, images, videos, and audio, based on commands received. As a guarantee of quality standards, the document states that the incorporation of new technology "has great disruptive potential, but does not alter the values that guide the exercise of professional journalism ... maintaining the commitment to impartiality, correctness, and agility expressed in this document" (Grupo Globo Editorial Principles, 2024, our translation).
The document emphasizes that the use of AI will be transparent and subject to human supervision; however, it admits the difficulty of carrying out this monitoring for all materials, especially those produced in an automated way.
This does not mean that every piece of content generated in an automated way will have to be reviewed by a human before it is published. This obligation would render much of the efficiency and comprehensiveness allowed by the use of the tool innocuous. The correctness and quality of this content will be guaranteed by process supervision, sample readings, and other forms of checking. (Grupo Globo Editorial Principles, 2024, our translation)
The Group also points out in the document that AI can be used in the most varied stages of the news production process and that the public will be notified about the use of technology when journalists deem it necessary.
This means that, from the briefest note to the most extensive report, technology can be used, to a greater or lesser extent, whenever it contributes to journalistic information being impartial, correct, and provided quickly. In some cases, however, it will be necessary to highlight how artificial intelligence has been used in a particular piece of journalistic content. This will be done whenever it helps the public to understand the circumstances in which the report was produced. (Grupo Globo Editorial Principles, 2024, our translation)
The document lists some examples of the use of technology, such as searching for and gathering information, processing large volumes of data, and accessing reliable databases. It also points out that the final responsibility for the content published will always lie with the professionals involved, and that "journalists will adopt strategies for possible errors and biases produced by artificial intelligence" (Grupo Globo Editorial Principles, 2024, our translation).
One of the precautions pointed out concerns opinion journalism, which should not be left in the hands of artificial intelligence: "opinion and analysis should be reserved for journalists, who can provide the necessary context and perspective" (Grupo Globo Editorial Principles, 2024, our translation). In the case of audio and images, the content must not alter reality and the public must be informed about the use of AI tools.
Finally, the document states that the use of the tools must "strictly observe and respect copyright and intellectual property, both in relation to third-party content and its own materials" (Grupo Globo Editorial Principles, 2024, our translation) and emphasizes that Grupo Globo trains its journalists to use the technology.
In this sense, it is clear that the update to the Editorial Principles encourages journalists to use AI tools in their production routine, but it also addresses important points that reflect the group's caution in the face of the dangers involved in using these tools, including misinformation and the improper exploitation of data.
5. The narrative about Editorial Principles in Jornal Nacional
The strategy adopted by the Globo group goes beyond guiding its employees. The updating of editorial principles has been publicized in digital newspaper reports and on TV news programs linked to the group.
In this study, we chose to analyze, through the Globoplay platform, a special nine-minute audiovisual report on the subject, broadcast by Jornal Nacional (JN), TV Globo's highest-rated news program, on June 27, 2024, mainly because of its massive reach and the didactic and informal language of television.
Through the Audiovisual Materiality Analysis (Coutinho, 2018), which proposes the identification of narrative conflicts and productions of meaning in the various components of audiovisual narratives, we sought to understand how the broadcaster exposed the updating of the principles to the public through the report shown on JN.
Based on the analysis model proposed by Coutinho (2018), we established evaluation axes with questions aimed at the object studied, as shown in Table 1.
Table 1. Axes and questions of analysis

Axis of analysis: Benefits and adaptations
Questions: Does the newscast highlight the benefits of using digital platforms and AI tools in journalistic production? Have any adaptations been made to the production routine in order to use the tools?

Axis of analysis: Risks and precautions
Questions: Does the narrative highlight the risks and dangers of using AI tools in journalism? Are there clear and detailed notes on the precautions to be taken by the group's professionals?

Axis of analysis: Transparency and language
Questions: Does the narrative produce the sense that the tools are beneficial to journalism, or is it more focused on the risks and dangers? Is there transparency about the dangers and impacts - positive or negative - that technology can have on journalism? What language does the narrator use? Is it colloquial? Is it didactic? Is the narrative clear?
The main point, emphasized in presenter William Bonner's lead-in, was that the use of artificial intelligence in Grupo Globo's newsrooms will be supervised by humans. The report begins by exploring the benefits of rapid-response tools in journalists' daily routine of questioning. Reporter Ben-Hur Corrêa, an AI specialist, takes on the role of interviewee in the story and says that he and his colleagues have gained in speed and accuracy of investigation. On screen, as an example for the discussion, an image shows a person asking a question-and-answer tool to automatically create an infographic. The narrative then points out that, in order to make the use of tools like this transparent, the topic has been included in the group's editorial principles. The report retrieves an archive excerpt from when Grupo Globo leaders first released the document and highlights the two moments when it was updated: in 2018, with guidelines on the use of social networks by Globo journalists to guarantee the impartiality of journalistic work, and in 2023, with changes to the coverage of massacres and violent attacks, such as a ban on publishing the image and name of the perpetrators.
In his on-camera appearances, reporter Pedro Bassan draws on elements that have marked the history of journalistic investigation: with a streetcar passing by and a pad of paper and pen in his hands, he points out that these were once the tools used to find the news; in another segment, he appears typing on a typewriter; finally, he notes that the computer and the internet have improved the work of the press. He thus emphasizes that Grupo Globo is taking advantage of advances in technology to improve its journalistic work while maintaining its commitment to correctness, impartiality, and agility.
As well as stating that there will be human supervision, the narrative reinforces, by highlighting the document on the screen, that this will be done by sample readings, process supervision, and other forms of checking. The report presents the basic precautions to be used at each stage of the news production process, such as access to reliable databases when investigating and respect for copyright when generating artificial content. It also highlights transparency as a fundamental principle, stating that the public should be informed about the use of the tools.
The narrative then explains that AI can help journalists, for example, by quickly analyzing large amounts of content present in documents that serve as the basis for news stories. Reporter Ben-Hur Corrêa, the AI specialist interviewed in the narrative, comes on the scene once again, explaining that the path is always human-machine-human, making it clear that the final word belongs to the journalist.
Finally, the narrative once again talks about the benefits of AI, citing as an example the new tool for summarizing news from the newspaper O Globo, called Irineu - in honor of the newspaper's founder Irineu Marinho. At the end of the narrative, Bassan highlights the principles that will continue to guide journalism, guaranteeing the credibility of information.
It is therefore understood that the newscast explores the benefits of using AI tools in the production routine and gives examples of how the process could be streamlined and made more efficient, but does not mention which tools are in use or could be used in the future. Nor is there any detailed indication of the adaptations that have taken place, or could take place, in journalists' routines to accommodate the tools; after all, many professionals still do not know how to use artificial intelligence or do not have the time to learn on their own in the midst of a hectic newsroom routine.
Despite highlighting the maintenance of the group's basic principles and values and the guarantee of the credibility of information - with careful checking, verification, and human review - the narrative does not point out the risks that these tools pose to the journalistic field. The danger of collecting false data, untrue information, or information laden with prejudice and hate speech, for example, could have been highlighted in the narrative.
The report is therefore more focused on the benefits than on the dangers posed by AI. However, there is a constant attempt to reinforce the group's commitment to ensuring the responsible use of the technology. By exploring practical examples and situations experienced by journalists throughout history, the narrative was faithful to television's role of informing with didacticism and colloquialism.
6. Final considerations
In this scenario, the logic of journalism as an opinion-former is subject to the distribution criteria of computing and to the particular interests - political, ideological, and commercial - of big techs. In this communication model, even independent journalists with their own editorial profiles are increasingly dependent on algorithmic criteria. Professionals who claim to be free from the constraints of a particular media outlet's values are subject to the veiled interests of the large groups that run the digital platforms. While professional journalists and media outlets strive for ethics and care with information, they have to deal with the lack of accountability for the content circulating on digital platforms - a phenomenon that increases disinformation and, consequently, works against journalism. While a TV channel is held responsible for the content it broadcasts, platforms like YouTube present themselves as mere repositories of videos for which they claim no responsibility.

After countless failed attempts to discuss and regulate platforms in the Brazilian National Congress, the problem has only worsened. While media outlets try to promote media convergence in search of interaction and engagement with a connected public, and while journalists fight against disinformation and defend professional journalism, they are waiting for the state to regulate digital platforms so that the internet does not remain a lawless land. Faced with this scenario, we are witnessing, and are part of, a communication process permeated by the political, economic, and institutional interests of the big techs, who run the algorithms - the same algorithms that influence consumption and everyday decisions in a veiled way.
It is true that the potential of digital platforms, as new infrastructural models, has allowed the circulation of news and the interaction between media outlets and their audiences to expand, accompanying a new way of consuming content. However, media outlets and journalists have little insight into how this distribution of content takes place on digital social networks. It is no coincidence that so much fake content and hate speech has gained ground and reverberated across platforms. The lack of transparency about how algorithms work is therefore detrimental to democratic communication.
The challenge of taking advantage of this infrastructure without losing sight of the basic principles of journalism, and without falling prey to the algorithms of big techs, is present in Grupo Globo's strategies for publicizing its use of artificial intelligence. The update to the document produces the sense that the group is willing to experiment with and use AI tools under the supervision of its professionals. The JN report publicizing the measures adopted by the group did not inform viewers of the risks involved in these tools, nor did it explore the impact of the lack of algorithmic transparency. It did, however, convey to viewers confidence in the arrival of the technology in newsrooms.
Through its communication strategies aimed at the public, Grupo Globo seeks to maintain its leading role, and a certain pedagogical role, by signaling that it is clear about the risks of using AI and about the possibilities allowed by digital platforms. Although the document does not list the dangers that technology poses to journalism in this digital media ecosystem, the precautions it sets out are aimed at addressing these problems. This dilemma, however, is not detailed to the viewer: the audience is not warned by the news program about how the use of AI can create risks in the production and consumption of news. Newsrooms' adaptations to the use of platforms and artificial intelligence still need further study. What is already clear to us is the expansion of the power and control of platforms into different sectors of society. Uber has created a new work system and taken a leading role in urban transportation. The same can be said of Airbnb, which has disrupted the hotel sector and affected employment. What, then, can we expect from digital social media platforms, which permeate the circulation of information, including journalistic information? The challenge is to take advantage of their affordances without submitting to their logic - a process that involves seeking paths through research, regulation, and media education.
References
Barros, J. V., Marques, A. F., Kinoshita, J., Moliani, J. A., Silva, N. R. da, & Grohmann, R. (2021). A plataformização do trabalho jornalístico: dimensões, regime de publicação e agenda de pesquisa. Avatares de la Comunicación y la Cultura, (21). https://bit.ly/49HxGtK
Coutinho, I. (2018). Compreender a estrutura e experimentar o audiovisual: da dramaturgia do telejornalismo à análise da materialidade. In C. Emerim, I. Coutinho, & C. Finger. (Org.), Epistemologias do telejornalismo brasileiro (Coleção Jornalismo Audiovisual V. 7, pp. 175-194). Insular.
Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. Bransford (Eds.), Perceiving, acting, and knowing: Toward an ecological psychology (pp. 67-82). Lawrence Erlbaum Associates.
Gillespie, T. (2018). A relevância dos algoritmos. Parágrafo, 6(1), 95–121. https://bit.ly/49P9Fkf
Helmond, A. (2015). The Platformization of the Web: Making Web Data Platform Ready. Social Media + Society, 1(2), 1–11. https://doi.org/10.1177/2056305115603080
Jornal Nacional (2024). Grupo Globo atualiza princípios editoriais para incluir uso de inteligência artificial [vídeo]. Globoplay. http://bit.ly/3WxGMoo
Landim, G. (2023). Ameaças para silenciar o mensageiro: ataques e agressões aos profissionais do jornalismo como notícia no Jornal Nacional [Dissertação de mestrado, Universidade Federal de Juiz de Fora]. Repositório UFJF. https://bit.ly/dissertGLandim
Landim, G., & Coutinho, I. (2022). Mídias tradicionais na internet: a plataforma digital como ampliação do discurso da (não) violência contra jornalistas. In Notícias em Pauta (pp. 356-382). Ria Editorial. Retrieved July 9, 2024, from: https://bit.ly/3Eojb3e
Martín-Barbero, J. (2015). Dos meios às mediações. Editora UFRJ.
Mintz, A.G. (2019). Midiatização e plataformização: aproximações. Novos Olhares, 8(2), 98-109. https://doi.org/10.11606/issn.2238-7714.no.2019.150347
Poell, T., Nieborg, D., & Dijck, J. V. (2020). Plataformização. Fronteiras - Estudos Midiáticos, 22(1). https://doi.org/10.4013/fem.2020.221.01
Sichman, J. S. (2021). Inteligência Artificial e sociedade: avanços e riscos. Estudos Avançados, 35(101), 37–50. https://doi.org/10.1590/s0103-4014.2021.35101.004
Sodré, M. (2014). A ciência do comum: notas para o método comunicacional. Vozes.
Princípios Editoriais do Grupo Globo. (2024). TechTudo. https://bit.ly/4g8D7EE
Van Dijck, J., Poell, T., & de Waal, M. (2018). The platform society: Public values in a connective world. Oxford University Press.
Winques, K. (2022). Imaginários algorítmicos: reflexões a partir de um estudo de recepção de matriz sociocultural. Fronteiras - Estudos Midiáticos, 24(2). https://bit.ly/3Ebseom