
The law, which comes into effect from December 10, is designed to protect children from the potential mental health risks, cyber-bullying, inappropriate content, and addictive behaviours associated with social media use at a young age. The new legislation was prompted by News Corp's Let Them Be Kids campaign. With the ban less than two months away, many parents, teens, and social media users are left wondering exactly how it will work and which platforms are affected. Here's everything you need to know.

When will Australia's social media ban come into effect?

Australia's new Online Safety Amendment (Social Media Minimum Age) Act 2024 sets a minimum age of 16 for holding an account on major social media platforms. The ban, taking effect from December 10, requires companies to verify users' ages and prevent under-16s from opening or maintaining accounts. Platforms that fail to comply could face fines of up to $49.5 million. Children on TikTok and Snapchat will have their content archived and their accounts frozen from mid-December, while Instagram and Facebook accounts will be either temporarily paused, deactivated or deleted.

What sites and apps are affected?

The government has identified the following as "age-restricted social media platforms":

- TikTok
- Instagram
- Facebook
- Snapchat
- X (formerly Twitter)
- YouTube

Are gaming platforms like Roblox included in the ban?

Online gaming and streaming platforms such as Roblox, Twitch, and Steam remain under review, and it has not yet been decided whether they will be exempt from the ban.

Are messaging platforms like Discord or Messenger included in the ban?

Messaging apps like WhatsApp, Messenger, and Messenger Kids are expected to be exempt from Australia's new under-16 social media ban, as they're classed as private communication tools rather than public networking platforms. However, Discord remains under review, with regulators considering whether its public servers and community channels make it function more like social media. Telegram and Signal are also being assessed but are likely to remain outside the ban for now.

What do the social media companies have to do?

Under the new guidelines, social media companies must take "reasonable steps" — using a "multilayered waterfall approach" — to stop children under 16 from having accounts on their platforms. Examples of these steps include detecting and deactivating underage accounts and preventing users from creating new profiles after being removed. Platforms won't be required to use any specific technologies, including those trialled in recent age-assurance tests. They also can't rely solely on self-declared ages and must continually review and improve their systems to ensure ongoing compliance.

What age verification will be needed?

Social media companies won't have to verify the age of every user, as blanket age checks have been deemed "unreasonable". They also can't require people to provide government ID as the only way to prove their age, and must offer practical alternatives. In addition, platforms aren't expected to store personal data collected during age verification — record-keeping will focus on their overall systems and compliance processes, not individual users. Underage users should not be automatically ported to alternative platforms (such as from Facebook to Messenger Kids) without explicit opt-in. How tech companies meet these expectations will be watched closely by other countries that are considering following Australia's lead.

What happens if underage people do access these sites?
If underage people access social media sites after the ban, they will not face penalties; the responsibility falls entirely on the social media platforms to enforce the ban and face fines for noncompliance.

What about existing young social media users?

Underage users on Snapchat and TikTok will be able to archive their existing posts as part of the process of freezing these accounts, the platforms confirmed on Tuesday. Teen TikTok users will have the choice to deactivate their account, suspend it, or fully delete it. Snapchat will make a "Download my Data" tool available to teen users, which vice president Jennifer Stout said would "secure photos and communications before accounts are disabled and lost".

Why is an age limit needed?

The government says the age limit is needed to protect children from the harms of social media. Research has linked heavy or early social media use to issues such as anxiety, depression, body image concerns, online bullying, and exposure to inappropriate or addictive content. Supporters argue that many platforms were never designed for young teens and that stronger safeguards are necessary to prevent companies from collecting children's data or targeting them with ads. The age limit is also intended to give children more time to develop emotionally and socially before engaging with online environments that can expose them to adult material, misinformation or predatory behaviour.

Do other countries have similar rules?

While other countries have age restrictions for social media, Australia is taking a world-first step with a nationwide minimum age of 16 for holding accounts. In the US, UK, Canada, and New Zealand, the minimum is typically 13, while in South Korea parental approval is required for under-14s. The EU generally requires parental consent for under-16s, though some member states allow sign-ups from 13. What sets Australia apart is that the law focuses on account access itself, not just data collection, and places the responsibility squarely on platforms to block underage users. Last month, the Danish government announced it will follow Australia's lead, with plans to implement a social media age limit for children.

Could there be problems?

Some critics have warned the policy could be difficult to enforce and may limit young people's ability to connect, learn, and express themselves online.

Originally published as Everything you need to know about the under-16s social media ban in Australia