“Since then, we have seen the community of fact-checkers grow in India to the point where India is now the country with the most fact-checkers in the world certified by the International Fact-Checking Network (IFCN),” she said. “We want to make sure that the fact checks they produce surface on our products. As a result, you can see fact checks on Google Search, Google News and YouTube, in both English and Hindi.”
Liu said that, because journalists are on the front lines of the fight against misinformation, the Google News Initiative (GNI) India Training Network was established in that spirit. The program brings skills in fact-checking, digital safety and security, and other topics into newsrooms, where working journalists can teach others.
“We have 250 trainers across the country who can train in over 10 languages. And since we launched this program in 2018, we have trained 38,000 journalists, media educators, journalism students and fact checkers across the country,” she said. “We are very excited to expand this program this year to new topics related to fact-checking and countering misinformation.”
She said topics such as how to use data to improve fact-checking and how to fact-check subjects such as climate change will be among those covered in upcoming programs over the next few months.
On the policy front, Clement Wolf, senior director of public policy, information integrity at Google, said the company has policies in place covering a range of harmful content and behavior, including hate speech, harassment and various forms of misinformation and disinformation, among many others.
“We continue to evolve these policies over time to respond to new threats or new opportunities to do good work,” he said. “Enforcing these policies at scale is one of the challenges of operating platforms like ours.”
To do this, he said, Google and the platforms it operates, such as YouTube, rely on flags or notices from ordinary users and others who report content that breaks the rules. Wolf said the company’s machine learning systems help it do this work at scale, but stressed that context is key when evaluating content for misinformation or disinformation.
“We really rely on that complementarity between human reviewers, who are trained and understand the nuances of our policies, and the machine learning systems whose job it is to surface content for those people to review. And, of course, the outcome of those reviews informs the machine learning systems so they can do better over time.”
Wolf said Google removed more than 3.4 billion ads in 2021 and took action on nearly 4 million channels in the last quarter of that year. “We have policies for each of our services,” he explained. “These policies vary from service to service. Even though we consider the same harms across different services, those services do not have the same goals or the same user expectations, and so we might respond to those harms differently from one service to another.”