A sense of mea culpa can be felt throughout Menlo Park, Facebook’s headquarters, as the company brainstorms how to improve the social network for the coming year. Announcing his good intentions publicly, Mark Zuckerberg said he wanted to “fix Facebook” after realizing his creation, which now counts some 2 billion users, still had a lot of work to do to “protect the community from hate and abuse, defend against interference by nation states, and make sure that time spent on Facebook is time well spent.” This process will go through the following key stages:

More friends, fewer brands

This is undoubtedly Facebook’s most ambitious plan. As the Guardian explains, the social network plans to return to its roots and become a social network again – in the true sense of the word – where human relationships are prioritized above all else. This will be done through a complete redesign of the news feed, where users’ posts are currently drowned out by ads and media content. “We believe that human interaction is more important than passive content consumption. Facebook was made to bring people together. This update must help us get there,” said John Hegeman, who heads Facebook’s news feed. It’s also a way for Facebook to fight fake news.

Funding against fake news

Speaking of fake news: it is undoubtedly the thorn in Facebook’s side, and the social network isn’t shying away from the fight. In April 2017, for instance, the company contributed 14 million dollars to the News Integrity Initiative, which will endeavor to “address fake news, misinformation and ways the Internet allows users to be informed in alternative ways,” a Facebook memo explained. The initiative is run in partnership with the City University of New York (CUNY) and has also received funding from Wikipedia, UNESCO and Mozilla, to name but a few. It remains to be seen how they intend to accomplish these goals.

The hunt for impersonators

Facebook wants to know its users so well that it can recognize them in photos without their friends having to tag them manually. On December 19, it announced that it plans to accomplish this with some fairly sophisticated facial-recognition AI. Once the technology is ready, users recognized in pictures will receive a notification, because “you are in control of your image on Facebook and you have the right to choose whether to identify yourself or not, as well as to contact the person who posted a picture of you if it bothers you,” the social network explains. Users who find this technology intrusive will be able to deactivate it.

An army of moderators

Since Facebook plans to focus on “human” content, it will have to moderate what is posted. As Mark Zuckerberg himself explained on May 3, 2017, people occasionally “hurt themselves or others on live streams or in videos they post later.” He was undoubtedly referring to the suicides broadcast live on the platform, which very publicly called Facebook’s ability to filter content into question. The company will hire 3,000 moderators over the course of 2018; they will join the 4,500 moderators already on the job, creating a veritable anti-violence army.