What Regulations Are On The Horizon For The Metaverse?
How are regulations for the metaverse shaping up? New legislation and upcoming regulations look set to have big implications for creators and users within the metaverse, and the ramifications are broad. They encompass broadcast and media regulation, data and privacy regulation, intellectual property (IP) issues, artificial intelligence (AI), and content and interactions.
Media & IP
From a media regulation and IP perspective, what issues could arise from a live concert in the metaverse? The first thing to consider is rights clearance. Using the example of a live concert, if you were to play other people’s music in the metaverse, you might need to prepare a bespoke agreement for how content is licensed. This would align with your terms of service. Also, licenses are usually granted on a territorial basis, so you would need to consider adding the metaverse to the license for clarity.
Censorship and content standards are two other key areas of focus. Legal compliance teams need to know which kinds of content certain governments and countries will censor or restrict. They also need to consider what artists can and cannot include in their performances.
Video and media regulations also need attention. Currently, no dedicated metaverse law exists, although, as with any new development, a network of existing laws can come into play. Since the metaverse runs in an audiovisual format, how content interacts with existing broadcast and audiovisual regulations could become a further issue. The cornerstone of audiovisual regulation in the EU is the Audiovisual Media Services Directive; its latest update took place in 2018, and national implementations are ongoing. If you are a metaverse platform provider, you would also need to consider whether the platform falls under the category of “video-sharing platform services”.
Data & AI
What data and AI regulatory frameworks might apply in the UK and EU when hosting a concert in the metaverse? Data in the metaverse would mainly be covered by the UK and EU General Data Protection Regulations (GDPR). However, very little in the GDPR is actually specific to AI. The GDPR will have massive implications for all metaverse participants simply because far more data can potentially be collected at an event in the metaverse than at a concert in the real world, for example.
Data privacy issues in the metaverse will become an extension of regulations already applied to the internet. Providers need to make sure protocols are in place to ensure that any data collection happens fairly and transparently. They should also provide privacy notices and obtain consent for the use of any tracking technologies.
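As a rough illustration of the consent requirement described above, the sketch below gates data collection behind a privacy notice and explicit, purpose-specific consent. All names here (`ConsentManager`, `record_event`) are hypothetical, not a real compliance API; this is a minimal sketch of the pattern, not legal advice.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentManager:
    """Tracks whether the privacy notice was shown and which purposes the user consented to."""
    notice_shown: bool = False
    consents: dict = field(default_factory=dict)  # purpose -> bool

    def show_privacy_notice(self) -> None:
        self.notice_shown = True

    def grant(self, purpose: str) -> None:
        # Consent obtained before the notice is shown is not informed consent.
        if not self.notice_shown:
            raise RuntimeError("Consent is only valid after the privacy notice is shown")
        self.consents[purpose] = True

    def allowed(self, purpose: str) -> bool:
        return self.consents.get(purpose, False)

def record_event(cm: ConsentManager, purpose: str, event: str, log: list) -> bool:
    """Only collect the event if the user consented to this purpose; otherwise drop it."""
    if not cm.allowed(purpose):
        return False
    log.append((purpose, event))
    return True
```

The design point is that collection is off by default: without a shown notice and an explicit grant, `record_event` silently discards the data rather than logging it.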
Platform providers, payment providers, and people selling virtual art, non-fungible tokens, virtual merchandise, and goods need to make sure that they have contracts in place to deal properly with data privacy issues. They need to consider the possible risks involved and think of it as a compliance exercise.
The Artificial Intelligence Act outlines different levels of regulation depending on the perceived risks posed by different types of AI. Some uses of AI relevant to the metaverse may rate as unacceptable, or fall into the heavily regulated high-risk bracket, if they amount to subliminal, exploitative, or manipulative techniques that cause harm and/or involve automated face recognition or other types of biometrics.
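The tiered approach described above can be pictured as a simple lookup from an AI use case to a risk tier. The use cases and tier assignments below paraphrase this article's summary, not the Act's legal text, and `classify_ai_use` is an illustrative helper, not a real compliance tool.

```python
# Illustrative mapping of metaverse-relevant AI uses onto the AI Act's
# risk tiers, as summarized in the text above. Assignments are assumptions
# for the sketch, not a reading of the final legislation.
RISK_TIERS = {
    "subliminal_manipulation_causing_harm": "unacceptable",
    "exploitation_of_vulnerable_users": "unacceptable",
    "remote_biometric_identification": "high",
    "automated_face_recognition": "high",
}

def classify_ai_use(use_case: str) -> str:
    # Uses the Act does not single out default to the minimal-risk tier.
    return RISK_TIERS.get(use_case, "minimal")
```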
Content & Interaction
The recent growth of content and interaction in the metaverse brings with it more privacy challenges. Traditionally, online harm was the main focus of media regulators, but privacy regulators now have this new field to deal with.
Protecting minors as a privacy issue has become a global challenge, and a number of authorities have issued guidelines for online services. The UK regulator, the Information Commissioner’s Office, published the Age Appropriate Design Code with 15 standards that these services need to follow.
Digital content and private devices have completely changed the way that we consume content. This has brought new methods of protecting minors on digital devices, including clear and visible labels on content, technical filters, and video streaming services that display ratings before users click “play”. In the games industry, codes of conduct also came into play in the form of moderation policies and procedures for sanctioning disruptive behavior.
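The "label plus technical filter" pattern mentioned above can be sketched in a few lines: show the rating before playback, and block playback when the viewer is under the rating's minimum age. The rating scheme and function names here are illustrative assumptions, not any regulator's actual scheme.

```python
# Hypothetical rating scheme mapping a content label to a minimum viewer age.
RATING_MIN_AGE = {"all": 0, "12": 12, "16": 16, "18": 18}

def can_play(rating: str, viewer_age: int) -> bool:
    """Technical filter: allow playback only if the viewer meets the rating's minimum age."""
    return viewer_age >= RATING_MIN_AGE[rating]

def pre_play_screen(title: str, rating: str) -> str:
    """Clear, visible label shown before the user clicks 'play'."""
    return f"{title} (rated {rating})"
```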
In the metaverse, media consumption happens in a shared space. This brings new challenges, in particular, regarding the content that people may interact with, which has a much broader range in virtual worlds.
Toxic Behavior: Evolving Responses
The response to disruptive behavior and toxic users in the metaverse is still evolving. The development of codes of conduct, flagging mechanisms, and ways to generally deal with problematic behavior poses new and interesting challenges. As with all new developments, it’s important to monitor how new legislation and regulations address these challenges.
In general, there are three types of online content: content that is generally banned because it violates criminal laws, content that has age restrictions and requirements, and content suitable for all audiences.
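The three-tier taxonomy above maps naturally onto a moderation decision: banned content is removed, age-restricted content is gated behind age verification, and general content is always allowed. The enum names and the specific outcomes (`"remove_and_report"`, the 18+ threshold) are illustrative assumptions for this sketch.

```python
from enum import Enum
from typing import Optional

class ContentTier(Enum):
    BANNED = "banned"                  # violates criminal law
    AGE_RESTRICTED = "age_restricted"  # age requirements apply
    GENERAL = "general"                # suitable for all audiences

def moderation_action(tier: ContentTier, verified_age: Optional[int]) -> str:
    """Decide what to do with a piece of content for a given viewer."""
    if tier is ContentTier.BANNED:
        return "remove_and_report"
    if tier is ContentTier.AGE_RESTRICTED:
        # An unverified viewer is treated as underage.
        return "allow" if (verified_age or 0) >= 18 else "block"
    return "allow"
```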
Currently, the regulatory focus lies primarily with service providers. However, regulations are shifting to address other actors. For example, the French Senate recently adopted a draft bill aimed at strengthening parental control on the internet, which would require operating systems to install parental controls on devices by default. Similar legislation is also being discussed in Germany.
Online harm is an area that has been under growing scrutiny from media regulators, law enforcement agencies, data privacy authorities, and consumer protection bodies. The data protection authorities’ focus is increasingly on the protection of minors. This will inevitably increase as users produce more data. This issue is leading to national legislation in the EU and around the world intended to ensure safety and reduce instances of online harm.
What does this mean for companies in the metaverse?
These issues have global relevance, and a debate around similar regulations outside of the UK and Europe has begun. For example, California legislators have started working on a draft Age Appropriate Design Code Act. Other US initiatives taking shape include federal regulation of AI and reforms of safe harbor protections regarding user-generated content.
What is the main takeaway from all of this? It makes sense to think about compliance by design and a framework that addresses these various issues. It is also important to remain flexible enough to accommodate the differences between regional and national legislation.