Content Moderation

By Koo App

Koo’s approach to content moderation – a user manual

Koo’s core mission is to provide our users with a safe and trusted platform to be the voices of India. We aim to provide our users with a wholesome community in which to meaningfully engage with each other in a language of their choice. To help us achieve that, we expect all our users to abide by our Community Guidelines, which are available in multiple languages.

Koo Community Guidelines are designed to foster a safe and respectful space for our users to express themselves, with the highest regard for users’ freedom of speech and expression. Koo encourages a free exchange of thoughts and ideas among our users while at the same time adhering to the letter of the law and our legal responsibility to remove content which violates Koo’s Community Guidelines.

This section will help users understand Koo’s approach to Content Moderation in accordance with Koo’s obligations as a Significant Social Media Intermediary under the Information Technology (Intermediary Guidelines & Digital Media Ethics Code) Rules, 2021.

1. What types of actions does Koo take on Content which is in violation of its Community Guidelines?

(i) Action on Content: We may remove Koos, Re-Koos, comments, profile photos, handle names and profile names, with or without prior notice, if they are found to violate our Community Guidelines. However, in such cases, the action will not affect the account or the data connected to a user’s account in any manner.

While we take utmost care and exercise due caution in our content moderation practices, on occasion we may make a mistake. If you are of the view that your content was removed in error and wish to reinstate the content, you are welcome to submit an appeal for reinstatement here and we are happy to reconsider.

(ii) Action on User Profiles: If a user is found to be repeatedly violating our Community Guidelines or engaging in any illegal activities, we may take appropriate action to restrict their content or permanently remove them from the platform.

2. What types of Content are removed?

As a general rule, the following types of content are removed. We also remove content which is the subject matter of an order from a judicial or other empowered authority. Such orders can be submitted at this link.

(i) Hate speech and discrimination: Content which contains excessively inflammatory, instigating, provoking or demeaning language or other content directed against a person, religion, nation or any group of persons. Criticism or dislike may not, by itself, amount to hate speech or discrimination.

(ii) Terrorism and extremism: Content which promotes or supports terrorist organisations listed on the website of the Ministry of Home Affairs.

(iii) Abusive words: Content containing abusive words in each of the languages Koo operates in. In collaboration with government and non-government organisations, we have created a list of abusive words based on their frequency of use, present-day context, etc. This list is updated frequently.

(iv) Suicide and self-harm: Content which contains or depicts actions of causing actual bodily harm or death to oneself or incites someone to do so.

(v) Religiously offensive: Any content where –
(a) names or symbols or emblems or books or flags or statues or buildings of a religion are morphed or damaged or mutilated or made fun of or desecrated;
(b) gods or religious deities or prophets or figureheads or reincarnations and leaders of a religion are abused or derogatory terms are used for them.

(vi) Violent: Content containing excessive blood, gore, internal organs or acts of mutilation, decapitation, beating or harming a body (human or animal).

(vii) Graphic, obscene or sexual in nature & sexual harassment: Content depicting nudity or sexual acts, especially content involving women and children. Also, any unwelcome sexual conduct towards another user, especially women. It is prudent to note that in such cases, your intention does not matter; what matters is how the act is perceived by the user on the receiving end. Therefore, we encourage you to exercise utmost discretion.

(viii) Private information: Content containing information pertaining to, or photos of, government-issued identification documents, bank documents, email IDs, phone numbers or other personal information of a person or group of persons.

(ix) Child safety: Koo considers the online safety of children paramount and has zero tolerance for any content depicting, directly or indirectly: any abuse, nudity, harm or invasion of privacy of children. 

(x) National Emblems: Content which insults, destroys or otherwise unlawfully treats the Indian national emblems. Koo strictly follows Indian laws on this topic, including The Emblems and Names (Prevention of Improper Use) Act, 1950 and The State Emblem of India (Prohibition of Improper Use) Act, 2005.

Other categories of content, which may require investigation, adjudication or orders from judicial or other authorities, will be removed after completion of those steps.

3. How does Koo identify violations of its Community Guidelines?

(i) Human Moderation: In-App Reporting – Any registered user can report a violation of the Community Guidelines by clicking on the two dots in the top right corner of a Koo/Comment/Re-Koo and selecting the appropriate reason for reporting. Our team of moderators will review the reported Koo and take action, as required.

(ii) Automated tools: Koo deploys, and continues to evaluate, several automated detection tools to help with content moderation and keep the Koo platform safe:

  • Koo has collaborated with the Central Institute of Indian Languages to create a corpus of expressions – including words, phrases, abbreviations and acronyms – that are considered offensive or sensitive across 22 languages, and acts upon such content. This is an effort to reduce abuse and promote fair use of language among our users.
  • In addition, Koo has created its own corpus of abusive phrases and spam content based on the frequency of their use and context as seen on the platform, and uses self-developed automated tools to identify and remove such content.
  • Koo is currently experimenting with various vendors to develop cloud-based AI models for visual content moderation, especially in the context of nudity and child sexual abuse material.
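To illustrate the general idea behind corpus-based text moderation, here is a minimal, hypothetical sketch of a whole-word phrase matcher. This is not Koo’s actual implementation; the corpus contents and function names below are placeholders invented for illustration.

```python
import re

# Hypothetical corpus of flagged phrases. A real corpus would span many
# languages and be curated with partner organisations, as described above.
ABUSE_CORPUS = {"badword", "spam offer"}

def flag_content(text: str, corpus: set = ABUSE_CORPUS) -> bool:
    """Return True if any corpus phrase appears in the text as a whole-word
    match, after lowercasing and collapsing runs of whitespace."""
    normalized = re.sub(r"\s+", " ", text.lower()).strip()
    return any(
        re.search(r"\b" + re.escape(phrase) + r"\b", normalized)
        for phrase in corpus
    )

print(flag_content("Great post!"))             # → False
print(flag_content("This is a SPAM   offer"))  # → True
```

Flagged items would then be routed to removal or to human moderators for review; real systems also handle transliteration, spelling variants and context, which a simple matcher like this does not.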
