According to the Spotify statement, the council’s mission “is to help Spotify evolve its policies and products in a safe way while making sure we respect creator expression.” The intersection of freedom of speech and misinformation is a major issue for any company that publishes content. In Spotify’s case, Joe Rogan’s podcast put the company in a difficult position: some health experts said Rogan was spreading misinformation, while others argued he was exercising his free speech rights. Either way, Spotify is now trying to regulate the content on its platform more carefully. After the controversy became public, some artists removed their work from Spotify. However, the company later said it didn’t want to be a “content censor.”

Spotify’s Safety Advisory Council will help shape content moderation policies

Spotify says its Safety Advisory Council members will guide the team on “policy and safety-feature development.” Their feedback will help Spotify shape high-level policies and internal processes, but they won’t make enforcement decisions about specific content or creators. Many of the council’s members have been providing Spotify with feedback for years; now they are coming together formally as an advisory council. The group will inform the approach of Spotify’s content moderators regarding equity, impact, and academic research. Spotify also says it wants to expand the council with more members in the coming months.

Spotify is not the only platform trying to balance free speech and content moderation policies; almost every social platform faces the same problem. Twitter, for example, faced heavy backlash after removing Donald Trump’s tweets. Many people may use free speech protections to spread misinformation, whether knowingly or unknowingly. Forming an advisory council and consulting content experts is the best way for social platforms to balance content moderation policies with freedom of speech rights.