Content Moderation Guidelines

By Koo App

Koo’s approach to content moderation – a user manual

Koo’s core mission is to provide our users with a safe and trusted platform to be the voices of India. We aim to give our users a wholesome community in which to engage meaningfully with each other in a language of their choice. To help us achieve that, we expect all our users to abide by our Community Guidelines, which are available in multiple languages.

Koo’s Community Guidelines are designed to foster a safe and respectful space for our users to express themselves, with the highest regard for users’ freedom of speech and expression. Koo encourages a free exchange of thoughts and ideas among our users while adhering to the letter of the law and to our legal responsibility to remove content which violates Koo’s Community Guidelines.

This section will help users understand Koo’s approach to Content Moderation in accordance with Koo’s obligations as a Significant Social Media Intermediary under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

1. What types of actions does Koo take on Content which is in violation of its Community Guidelines?

(i) Action on Content: We may remove Koos, Re-Koos, comments, profile photos, handle names and/or profile names, with or without prior notice, if they are found to violate our Community Guidelines. However, in such cases, the action will not affect the account or the data connected to a user’s account in any manner.

While we take the utmost care and exercise due caution in our content moderation practices, we may on occasion make a mistake. If you are of the view that your content was removed in error and wish to have it reinstated, you are welcome to submit an appeal for reinstatement here, and we will be happy to reconsider.

(ii) Action on User Profiles: In case a user is found to be repeatedly violating our Community Guidelines or engaging in any illegal activities, we may take action to restrict their content, their visibility or their interaction with other accounts. For extreme and repeated violations, or in compliance with government orders, we may permanently remove a user from the platform. A detailed description of the profile actions and their rationale can be found at this link. Similarly, if you are of the view that your profile has been incorrectly acted upon, please use this form for reinstatement of your profile.

2. What types of Content are removed?

As a general rule, the following types of content are removed. We also remove content which is the subject matter of an order from a judicial or other empowered authority. Such orders can be submitted at this link.

(i) Hate speech and discrimination: Content which contains excessively inflammatory, instigating, provoking or demeaning language or other content against a person, religion, nation or any group of persons. Caution is exercised especially with respect to content which may lead to imminent violence. Criticism or dislike may not amount to hate speech or discrimination.

(ii) Terrorism and extremism: Content which promotes or supports terrorist organisations listed on the website of the Ministry of Home Affairs or otherwise identified by a government authority, including content that disseminates information on behalf of such organisations.

(iii) Abuse words: Content containing abusive words in each of the languages Koo operates in. In collaboration with government and non-government organisations, we have created a list of abuse words based on factors such as frequency of use and present-day context. This list is updated frequently.

(iv) Suicide and self-harm: Content which contains or depicts acts causing actual bodily harm or death to oneself, or which incites someone to commit such acts.

(v) Religiously offensive: Any content where –
(a) names, symbols, emblems, books, flags, statues or buildings of a religion are morphed, damaged, mutilated, made fun of or desecrated;
(b) gods, religious deities, prophets, figureheads, reincarnations or leaders of a religion are abused, or derogatory terms are used for them, in a manner that may cause undue harassment to a user or groups of users.

(vi) Violent: Content containing excessive blood, gore, internal organs or acts of mutilation, decapitation, beating or harming a body (human or animal).

(vii) Graphic, obscene or sexual in nature & sexual harassment: Content depicting nudity or sexual acts, especially involving women and children. Also, any unwelcome sexual conduct towards another user, especially women. Note that in such cases your intention does not matter; what matters is how the act is perceived by the user on the receiving end. We therefore encourage you to exercise the utmost discretion.

(viii) Private information: Content containing information pertaining to, or photos of, government-issued identification documents, bank documents, addresses, email IDs, phone numbers or other personal information of a person or group of persons.

(ix) Child safety: Koo considers the online safety of children paramount and has zero tolerance for any content depicting, directly or indirectly, any abuse, nudity, harm or invasion of the privacy of children.

(x) National Emblems: Content which insults, destroys or otherwise unlawfully treats the Indian national emblems. Koo strictly follows Indian law on this topic, including The Emblems and Names (Prevention of Improper Use) Act, 1950, and The State Emblem of India (Prohibition of Improper Use) Act, 2005.

(xi) Impersonation: Posing as another person, brand or organization in a confusing or deceptive manner. This includes using the same or similar name, handle, profile photo and/or posting pictures, logos or names of another person, brand or organization without their explicit consent. Due to the exceptional influence they may exert, posing as a prominent political, religious or social leader or an officer designated by the government of any nation is not permitted and may lead to removal of impersonating content or suspension or termination of the Koo account. 

Other categories of content, which may require investigation or adjudication or orders from judicial or other authorities, will be removed after completion of these steps. 

3. How does Koo identify violations of its Community Guidelines?

(i) Human Moderation: In-App Reporting – Any registered user can report a potential violation of the Community Guidelines by tapping the two dots in the top right corner of a Koo/Comment/Re-Koo and selecting the appropriate reason for reporting. Our team of moderators will review the reported content and take action as required.
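To make the reporting flow above concrete, here is a minimal sketch of how such reports might be queued for human review. This is not Koo's actual implementation: the `Report` record, the `enqueue_report` helper and the reason-to-priority mapping are all illustrative assumptions, including the idea that some categories (for example, child safety) are reviewed first.

```python
# Minimal sketch (illustrative only, not Koo's actual implementation) of
# how in-app reports could be queued for human review. All names, reasons
# and priority values here are assumptions for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from queue import PriorityQueue

# Hypothetical mapping of in-app report reasons to review priority
# (lower value = reviewed sooner).
REPORT_REASONS = {
    "child_safety": 0,
    "hate_speech": 1,
    "impersonation": 2,
    "spam": 3,
}

@dataclass(order=True)
class Report:
    priority: int
    created_at: datetime = field(compare=False)
    content_id: str = field(compare=False)   # the reported Koo, Comment or Re-Koo
    reporter_id: str = field(compare=False)
    reason: str = field(compare=False)

def enqueue_report(queue: "PriorityQueue[Report]", content_id: str,
                   reporter_id: str, reason: str) -> None:
    """Place a user report on the review queue, most urgent first."""
    priority = REPORT_REASONS.get(reason, 4)  # unknown reasons go last
    queue.put(Report(priority, datetime.now(timezone.utc),
                     content_id, reporter_id, reason))

queue: "PriorityQueue[Report]" = PriorityQueue()
enqueue_report(queue, "koo:12345", "user:987", "spam")
enqueue_report(queue, "koo:67890", "user:654", "child_safety")
urgent = queue.get()   # moderators see the child-safety report first
print(urgent.content_id, urgent.reason)
```

A priority queue of this kind would let the most serious reports reach human moderators first, while lower-risk categories wait their turn.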

(ii) Automated tools: Koo deploys, and continues to evaluate, several automated detection tools to help moderate content and keep the Koo platform safe:

  • Koo has collaborated with the Central Institute of Indian Languages to create a corpus of expressions – including words, phrases, abbreviations and acronyms – that are considered offensive or sensitive across 22 languages, and to act upon such content. This is an effort to reduce abuse and promote fair use of language among our users.
  • In addition, Koo has created its own corpus of abusive phrases and spam content based on the frequency of their use and the context seen on the platform, and uses self-developed automated tools to identify and remove such content (a simplified sketch of this kind of corpus matching appears after this list).
  • Koo is currently experimenting with various vendors to develop cloud-based AI models for visual content moderation, especially in the context of nudity and child sexual abuse.
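As a concrete illustration of the corpus-based matching described in the list above, here is a minimal sketch under stated assumptions: a per-language set of offensive expressions is checked against a post after simple Unicode normalisation. The corpus entries and the `ABUSE_CORPUS`, `normalise` and `flag_abusive` names are illustrative; Koo's real corpus and tooling are not public.

```python
# Minimal sketch of corpus-based abusive-phrase detection (illustrative;
# not Koo's actual tooling). A per-language corpus of offensive expressions
# is matched against normalised post text.
import re
import unicodedata

# Illustrative stand-in for the multilingual offensive-expression corpus;
# the real corpus is built with partner organisations and updated frequently.
ABUSE_CORPUS = {
    "en": {"badword", "another slur"},
    "hi": {"गाली"},   # placeholder entry, not a real corpus term
}

def normalise(text: str) -> str:
    """Lower-case, undo Unicode width/compatibility tricks, collapse spaces."""
    text = unicodedata.normalize("NFKC", text).casefold()
    return re.sub(r"\s+", " ", text).strip()

def flag_abusive(text: str, lang: str) -> bool:
    """Return True if any corpus phrase for `lang` occurs in the text."""
    normalised = normalise(text)
    return any(phrase in normalised for phrase in ABUSE_CORPUS.get(lang, ()))

print(flag_abusive("this post contains a BADWORD", "en"))   # True
print(flag_abusive("a perfectly polite post", "en"))        # False
```

A real deployment would go beyond plain substring matching – handling transliteration, word boundaries and obfuscated spellings, for example – and would pair this text signal with the image-moderation models mentioned in the last bullet.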