Google banned deepfake-generating AI on Colaboratory

Previously, Google Colaboratory was a relatively open platform. However, a change in its terms of use brings new restrictions.

Google created Colaboratory, or Colab, to give the public easier access to computing hardware. The platform notably allows users to run and train artificial intelligence (AI) models, and until now they could freely generate various kinds of content. The problem is that the platform can be exploited for malicious purposes. This weekend, Bleeping Computer noticed a change in the platform's terms of use. The change mainly targets deepfakes: this type of project is now classified as banned.

A new rule against deepfakes

Google appears to have made the change around mid-May, without announcing it. Users only became aware of it when they attempted to run a deepfake generator such as DeepFaceLab. Everyone who has tried to launch DeepFaceLab on Colab since has received an error message:

Deepfakes may seem like harmless fun, but the practice also has a darker side.

“You may be executing code that is not authorized, and this may restrict your ability to use Colab in the future. Please note the prohibited actions specified in our FAQ.”

Google Colaboratory

However, some code used to generate deepfakes does not trigger this warning. Reddit users, among others, reported that FaceSwap remains fully operational, even though it is a fairly popular tool in the field.

It is therefore possible that the restriction relies on a blacklist rather than on automated recognition. This would imply that the fight against deepfakes depends, more or less, on reports from the Colab community.
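A blacklist-based check like the one the article speculates about could look something like the sketch below. To be clear, nothing here comes from Google: the project names, the warning text's use, and the substring-matching logic are all hypothetical, purely to illustrate how a name-based blacklist differs from genuine content recognition.

```python
# Hypothetical sketch of a blacklist-style check. The blocked names and
# matching logic are illustrative assumptions, not Google's actual method.

BLOCKED_PROJECTS = {"deepfacelab"}  # hypothetical blacklist entries

WARNING = (
    "You may be executing code that is not authorized, and this may "
    "restrict your ability to use Colab in the future."
)

def check_notebook(code: str):
    """Return a warning if the code references a blacklisted project name."""
    lowered = code.lower()
    for name in BLOCKED_PROJECTS:
        if name in lowered:
            return WARNING
    return None  # no blacklisted name found; the code runs unchallenged

# A blacklisted project trips the warning...
print(check_notebook("!git clone https://github.com/iperov/DeepFaceLab"))
# ...while a tool absent from the list, like FaceSwap, passes silently.
print(check_notebook("import faceswap"))
```

Such a name-based approach would explain the reported behavior: DeepFaceLab is blocked while FaceSwap runs untouched, since only projects someone has added to the list are caught.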

An excessive measure on the part of Google?

A Google spokesperson gave TechCrunch more details in an email. He stated that the company remains alert to possible misuse of Colab, but wishes to manage such abuse in a balanced way, since the platform aims to facilitate access to important computing resources such as GPUs. As for the abuse detection method, Google prefers to keep it secret.

Google prefers to play the security card by prohibiting deepfakes.

“Abuse deterrence is an ever-evolving practice, and we cannot disclose specific methods, as adverse parties can take advantage of knowledge to evade detection systems. In general, we have automated systems that detect and prohibit many types of abuse.”

Google Spokesperson

As for the deepfake itself, the practice consists of using an AI to convincingly replace a person's face in an image or video with someone else's, hence the name “deepfake”. The results surpass Photoshop work and sometimes even Hollywood CGI.

A priori, this kind of content is harmless, even playful, and several videos in the category have gone viral. Unfortunately, some individuals use the technique for darker purposes. Hackers can, for example, use it to target internet users for fraud or extortion. Deepfakes can also reach into far more sensitive areas, such as politics.
