Traditional content regulation has been with us for more than a century. Today, technological and social change has made it easier than ever to identify, filter, and report inappropriate material online. Content regulators in the EU are taking action to ensure that citizens can access lawful, quality information of all kinds via technology-enabled platforms. This article walks step by step through this new world of content regulation in the European Union, as set out in EU regulation of online media services and the accompanying technical and regulatory requirements for online platforms, and applies it to digital media platforms.
What is Content Regulation?
Content regulation is the design and enforcement of rules that govern what may be published and shared online. It is a response to the growth of online services and their ability to distribute inappropriate or harmful content: a regulatory authority determines what can and cannot be shared. Internet service providers (ISPs) and website hosts may be required to offer filters that block harmful or inappropriate content. In the EU, consumers are protected by several institutions, including the European Commission, the Court of Justice of the European Union, and the European Data Protection Board. Consumers also have the right to complain to their national data protection authority or their local consumer rights authority.
Why is online content moderation important?
When it comes to online media services, there is a strong connection between what you watch and what you learn. You should be able to move easily between videos, pictures, and articles that present a balanced view of the world around you.
That is why it is important to set up filtering software on your computer to keep out harmful content. But there is a darker side to this. People who exploit online platforms (for example pedophiles, terrorists, hackers, and identity fraudsters) often use social media or email to distribute their content.
Social media platforms and email providers frequently lack the tools or resources to fully remove offensive content or to downrank harmful or deceptive emails. That is why it is important to watch for these problems and report them to the relevant authorities so that corrective action can be taken.
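As a toy illustration of the kind of filtering described above, the sketch below flags messages against a simple keyword blocklist. The blocklist, function name, and example messages are invented for this example; real moderation systems rely on far more sophisticated classifiers and human review.

```python
# Minimal keyword-based content filter (illustrative only).
# BLOCKLIST and flag_for_review are hypothetical names for this sketch.

BLOCKLIST = {"scam-link", "phishing-offer"}  # invented flagged terms

def flag_for_review(message: str) -> bool:
    """Return True if the message contains a blocklisted term."""
    words = message.lower().split()
    return any(term in words for term in BLOCKLIST)

print(flag_for_review("click this phishing-offer now"))  # True
print(flag_for_review("hello, how are you?"))            # False
```

In practice a blocklist like this is only a first pass: it misses misspellings and context, which is one reason platforms combine automated filtering with user reporting.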
What measures are being taken against incitement to terrorism online?
Fragmented terrorism legislation leaves many jurisdictions, including EU member states, without effective legal or regulatory protection against incitement to terrorism online. This is why the European Commission has adopted a proposal to replace the existing patchwork of rules with a uniform, integrated approach. The goal is a level playing field for all stakeholders that still safeguards the right to free expression and the right to receive information, both of which are under pressure online. The proposed legislation would criminalize the following behavior:
- Speech that incites violence or terrorism, including incitement to discrimination or hatred based on race, religion, sexual orientation, or other identity or belief
- Offering or advising others to commit any of the above crimes

Offenders can be punished by a fine or imprisonment.
Online content regulation is vital to keep people safe from harmful content on the Internet. The European Union, for example, has adopted a one-hour rule for terrorist content: hosting providers must remove flagged terrorist content within one hour of receiving a removal order, and those who fail to comply can face penalties.
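The one-hour removal deadline can be illustrated with a small sketch. The helper name and timestamps below are invented for this example and are not part of any official compliance tooling.

```python
from datetime import datetime, timedelta, timezone

ONE_HOUR = timedelta(hours=1)

def removal_on_time(order_received: datetime, content_removed: datetime) -> bool:
    """Check whether content was removed within one hour of the removal order."""
    return content_removed - order_received <= ONE_HOUR

order = datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc)
print(removal_on_time(order, order + timedelta(minutes=45)))  # True
print(removal_on_time(order, order + timedelta(minutes=75)))  # False
```

The point of the sketch is simply that the deadline is measured from receipt of the removal order, not from when the content was first posted.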