- Every day, more than 100 million users of all ages have a safe and positive experience on Roblox.
- We strive to make our systems as safe as possible by default, especially for our youngest users. We do this through extremely conservative policies, and we leverage AI to detect and filter inappropriate chat messages, including personally identifiable information (outside of Trusted Connections). We proactively moderate content and do not allow the sharing of real-world images in chat.
- Of course, no system is perfect, and one of the biggest challenges in the industry is detecting critical harms like potential child endangerment. A series of friendly chats and supportive messages might take on a different meaning over the course of a longer conversational history, especially when the conversation happens between users of different age groups.
- We have developed Roblox Sentinel, an AI system built on contrastive learning that helps us detect early signals of potential child endangerment, such as grooming, so we can investigate sooner and, when relevant, report to law enforcement. (A simplified sketch of this approach follows this list.)
- In the first half of 2025, Sentinel helped our team submit approximately 1,200 reports of potential child exploitation to the National Center for Missing and Exploited Children, including attempts to circumvent our filtering mechanisms and other safeguards.
- We are excited to open source Roblox Sentinel, and we are actively seeking community engagement, which we hope will help build a safer internet.
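To give a flavor of the contrastive-learning approach referenced above, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the `embed` encoder, the placeholder index examples, and the threshold are stand-ins, not Sentinel's production components. The core idea is to score a message by how much closer it sits to an index of known-harmful examples than to an index of benign chat.

```python
import numpy as np

# Hypothetical encoder mapping a chat snippet to a unit-length vector.
# A real system would use a trained text-embedding model; this stand-in
# only guarantees a fixed-size, normalized embedding within one run.
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.standard_normal(128)
    return vec / np.linalg.norm(vec)

# Two reference indexes built offline from labeled examples: embeddings of
# messages from confirmed-harmful conversations, and embeddings of ordinary
# benign chat. Contrastive training pushes these two populations apart.
harmful_index = np.stack([embed(t) for t in ["placeholder harmful example"]])
benign_index = np.stack([embed(t) for t in ["placeholder benign example"]])

def risk_score(message: str) -> float:
    """Mean cosine similarity to the harmful index minus mean similarity
    to the benign index; higher means closer to known-harmful patterns."""
    vec = embed(message)
    return float(np.mean(harmful_index @ vec) - np.mean(benign_index @ vec))

# Illustrative cutoff; a real deployment would calibrate on held-out data.
FLAG_THRESHOLD = 0.2

if risk_score("hey, which game are you playing?") > FLAG_THRESHOLD:
    print("route this conversation for human review")
```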
Spending time with friends and competing with other players are central to the Roblox experience, and communication is at the heart of both. Every day, more than 111 million users come to Roblox, where the community sends an average of 6.1 billion chat messages and generates 1.1 million hours of voice communication in dozens of languages. This communication mirrors the real world: the vast majority is everyday chat, from casual conversation to gameplay discussion, but a small number of bad actors seek to circumvent our systems and potentially cause harm.
Last month, we shared our vision for age-based communication. We strive to make our systems as safe as possible by default, especially for our youngest users. For example, we do not allow user-to-user image or video sharing via chat. Our systems, while not perfect, are continuously improving and are designed to proactively block personally identifiable information, like phone numbers and usernames, and chat between non-age-verified users is strongly filtered (and not allowed for users under 13). Roblox is one of the largest platforms to require facial age estimation before users can chat more freely with people they know. Our goal is to lead the world in safety for online gaming, and we’re committed to open sourcing key safety technology.
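As a simplified illustration of the proactive PII blocking described above, the sketch below redacts a couple of common patterns. The patterns and the replacement policy are assumptions for this example; production filtering handles obfuscation, context, and far more signal types than a pair of regular expressions.

```python
import re

# Illustrative patterns only; these two regexes stand in for a much
# richer set of detectors (models, obfuscation handling, context).
PII_PATTERNS = [
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),  # US-style phone number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),       # email address
]

def redact_pii(message: str) -> str:
    """Replace suspected personally identifiable information with hashes
    before the message is shown to the recipient."""
    for pattern in PII_PATTERNS:
        message = pattern.sub("####", message)
    return message

print(redact_pii("text me at 555-123-4567"))  # -> "text me at ####"
```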
Today, we’re releasing our latest open-source model, Sentinel, an AI system that helps detect interactions that could lead to child endangerment. Long before something becomes explicit, Sentinel enables us to detect and investigate subtle patterns early and, when relevant, report to law enforcement.
Sentinel has been running on Roblox since late 2024 and is the latest addition to our open-source safety toolkit. In the first half of 2025, this proactive approach accounted for 35% of the potential child endangerment cases we detected, in many instances catching them before an abuse report could be filed. Combined with our other moderation systems, Sentinel expands the arsenal of tools we have to detect and act on these potentially serious violations.
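Because the signal often emerges from a pattern rather than any single message, a system like this plausibly aggregates per-message scores over a user's recent history. The sketch below (reusing the hypothetical `risk_score` from the earlier sketch) keeps a rolling window per user; the window size, minimum sample count, and escalation threshold are all illustrative values, not production parameters.

```python
from collections import defaultdict, deque

WINDOW = 50          # most recent per-message scores kept per user
MIN_MESSAGES = 10    # require some history before trusting the average
ESCALATE_AT = 0.15   # illustrative mean score that triggers review

history: dict[str, deque] = defaultdict(lambda: deque(maxlen=WINDOW))

def observe(user_id: str, message: str) -> bool:
    """Record one message and return True when the user's recent pattern
    of scores, not any single message, warrants human investigation."""
    scores = history[user_id]
    scores.append(risk_score(message))  # risk_score from the earlier sketch
    return len(scores) >= MIN_MESSAGES and sum(scores) / len(scores) > ESCALATE_AT
```

Averaging over a window like this favors sustained drift toward known-harmful patterns over one-off false positives, which matches the goal of catching subtle patterns early rather than reacting to a single message.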