“We want to become a positive part of community management” – Bodyguard.ai on tackling online toxicity

Online communities, we are told, are besieged by negativity and toxic behaviour. How do community managers stay on top of it all? As in all things, machines can help, but getting the right mix of human and AI moderation, and keeping it all up to date, is key.

A company that’s doing this behind the scenes for gaming brands like Team BDS and Paradox is Bodyguard.ai. PC Games Insider met the team at Game Connection Europe in Paris in November, shortly after the Nice-based Bodyguard.ai revealed its inaugural Business Online Toxicity Barometer. Analysing more than 170 million comments made over a 12-month period across 1,200 brand channels, it found that nine million interactions were toxic. Of those, some 28% were hateful and 1% were actual threats, while discrimination accounted for over 200,000 comments. Many were spam, scams, fraud or trolling – less alarming, but still very problematic.

This tallies with a Unity report from the end of the pandemic, which found that two out of three people who play games online experience harassment of some sort. Unity’s research showed that 92% of players think solutions should be implemented to reduce toxic behaviour in multiplayer games.

Arnaud Chemin (left) and James Clements (right) spoke exclusively to PC Games Insider on Zoom to explain what Bodyguard.ai is.

Bodyguard.ai has developed a rules-based AI tool that protects individuals, communities, and brands from toxic online content. It plugs into social media and other community tools, comment sections and the like, and, using a set of regularly updated rules, reacts quickly to toxic activity. This isn’t just about deleting harmful content – when we followed up our Parisian meeting with a detailed Zoom call, the team was keen to point out that this support is good for the mental health of human community managers, who are still needed. Arnaud Chemin (Head of Gaming) and James Clements (Commercial Operations Manager, UK) also emphasise that it’s important to recognise what positive activity looks like and make sure it doesn’t get drowned out.

Please tell us about your background and how you arrived at Bodyguard.ai.

Arnaud Chemin: My journey is very much in gaming. I have over ten years of experience in the industry, starting my career at Ubisoft as a market expert. I looked at competition trends, tech, and everything that could help production teams and management make data-oriented decisions. Very, very interesting stuff. And this is how I arrived at Bodyguard, because moderation sits at the crossroads of every social trend and technology. I became really interested in the social aspect of gaming, and how it impacts the lives of our players. Moderation is a huge part of this.

Gaming is not only one of the biggest entertainment markets but also probably one of the biggest opportunities for good moderation solutions
Arnaud Chemin

I met with Charles [Cohen] and Matthieu [Boutard], the founders of Bodyguard, and we agreed to work together because gaming is not only one of the biggest entertainment markets but also probably one of the biggest opportunities for good moderation solutions. Why? It’s because gaming is a social activity and social media in itself. It’s also mainstream now. Everybody plays. And as a result you can see all kinds of communities being created. But for it to thrive, it needs to have proper moderation in place.

I joined Bodyguard a year and a half ago to build this gaming vertical for the company. Our goal is to help publishers, developers, esports teams, and every stakeholder in the gaming industry address and manage toxicity issues.

James Clements: I have over ten years of experience in cloud and SaaS technology and arrived at Bodyguard just a little over four months ago in preparation for launching into the UK.

I’m covering all of the verticals within Bodyguard, one of them being gaming. I don’t have a huge amount of direct experience within the gaming sector, but I am a gamer myself. I love to talk to these companies and really get a feel of the industry with the help of Arnaud and Camille, who are based in France. We’re really hoping to make big strides in the UK.

When Bodyguard.ai was founded, it wasn’t just about games. Can you tell us a little about the company’s background and its vision for the games market?

James Clements: Bodyguard was founded by Charles Cohen. He is a computer genius who is passionate about the impact of social networks on our lives and the technological challenge of moderating online content. In 2017, he laid the foundation for what would become Bodyguard.ai technology, a unique contextual and instant moderation solution. The following year, he launched a free mobile app for anyone to protect themselves from online hate. In 2019, he built a team around his moderation solution – a technology that identifies and blocks 90% of toxic user-generated content on social networks (Twitter, Facebook, Twitch, YouTube, and Instagram) and other platforms. His vision for Bodyguard now is to have this for all large companies that have an online community. The value of it is, of course, to protect people – not only the people on the receiving end of toxicity, but also the people moderating it.

It’s important that we tackle the more serious hate in an automated fashion so that we can identify and remove 100% of it immediately – there are some types of content where there are no shades of grey, and these should be instantly removed. Of course, it’s very difficult to do that. A huge amount of work needs to go into Bodyguard to make it a proficient platform, but we are proud to boast a 90% detection rate for toxic content – far outstripping the performance of many social media algorithms.

Human moderators have such a traumatic time dealing with toxicity online that any tool that can help them is essential…

Arnaud Chemin: Yes. I would say that our ambition as a company is not to regulate the internet.

We don’t want to be someone who dictates what’s being said within the community. Things like criticism need to be there as well
James Clements

You know, we are not the police or the arbiters of the internet, nor of the upcoming metaverses that are around the corner. Our ambition is to offer protection to people who come onto the internet to discuss, to meet, and to share things with others.

The internet is social at its core. What we want is to offer them a safe space so that every opinion can be heard online without toxicity taking over this. It’s a question of freedom of speech, actually. On the internet, as maybe everywhere, the louder you talk, the more likely you’re going to be heard. We want to avoid that. We want everybody to be able to share their own opinion and experience.

This is why we created Bodyguard. As James said, we wanted to take action, and not just by saying, “This is bad, this is terrible.” We put all of our energy into this, to make the internet in general – and gaming is part of the internet – safer, more inclusive, and a better place for everyone.

Do you tackle all kinds of toxicity? What are the biggest threats beyond obvious hate speech?

Arnaud Chemin: Yes, absolutely. Let’s start with the definition. What is toxicity for us at Bodyguard? How do we define it? For us, toxicity is everything that will bring a bad experience to your online journey. It could be, of course, hate speech. It could be insults. But we have more than 20 classifications at the moment. We’re going to add more and more.

Insults, racism, homophobia, misogyny – all this kind of stuff. This is what we classify as hate in general. But we also have a “junk noise” classification. This covers all the comments that are not hateful per se but could still harm your experience. You name it – scams, spam, illegal ads. All the microaggressions you see online. We are all very familiar with it. As an individual or a business, you don’t want to see that on your social networks or in your games. So we do work on all the aspects of toxicity. And I think it’s very important to tackle toxicity globally, as a complex problem.

This is a mock-up of the Bodyguard.ai web app running on a Mac, from Bodyguard.ai's own press source.

James Clements: We don’t want to be someone who dictates what’s being said within the community. Things like criticism need to be there as well. There’s such a thing as positive toxicity if you like – you know, someone who goes, “What an effing kill!” or something like that, in a game. There’s a positive side to that as well, depending on the context, and I think it’s important to have a classification for that.

When it comes to spam and scams, this is a big point for the gaming sector. When free mobile games rely on in-game currency, for instance, we need to be able to automate the blocking of accounts that are reselling the currencies or reselling accounts – this is a big commercial aspect for these gaming companies, for sure.

You work with Team BDS for esports. You’ve got a relationship with game-maker Paradox Interactive. And outside of games, you work with sports clubs. What kind of companies benefit from this technology?

Arnaud Chemin: To give you a bit of context: gaming is our most recent vertical at Bodyguard. We’re very proud to have a partnership with Team BDS and with Paradox. We are also working with a few of the biggest gaming publishers in the industry, because what we’re doing for them is trying to protect their in-game chats from toxicity, which impacts fan enjoyment and player wellbeing.

Our work here is more targeted at what’s happening inside the game rather than on social media. We also work with them on social media, but the focus is really to protect the community inside games, because they want to protect their players and create a safe space for them, so that they stay in the game. If you enter a game and are insulted right away, it’s very unlikely that you’re going to stay for a long time. And, as you know, retention is key in gaming.

One of our longest-standing partners is a French football professional league. They’re committed to fighting racism and homophobia
Arnaud Chemin

Apart from gaming, we are also working in three other verticals: media, sports and brands.

With sports, we’re working with a number of top-flight football clubs and leagues. For example, one of our longest-standing partners is a French football professional league. They’re committed to fighting racism and homophobia, especially in sports. And this is why they are using Bodyguard. We are also working with over half of the top European football clubs in France, in Spain, in Germany, and in the UK as well, of course.

The UK is a more recent market for us. But we still work with the largest multichannel network in the UK and also one of the largest broadcasters. It’s the same in France as well. We’re working with the big French broadcasters because they want to protect their online presence – everything they are sharing outside of traditional TV. People are going on their website, or going on their social media, to comment on news and everything, and they really value moderation for this.

How does it work? Without giving away your “secret sauce”, what is the experience for people using it?

James Clements: The way you want to think of Bodyguard is almost like an extra person within the team, but it’s as quick as a machine. It’s got that human element. It’s built by humans – NLP [natural language processing] specialists are constantly building the rules and adding to them.

So it’s built by humans, but it’s as quick as a machine. What that means is this: traditional machine-learning moderation is typically built on keywords, and the big problem with that is that you fall close to the line of censorship. If you put in a keyword such as a profanity, then it’s going to block all profanity within that space, essentially.

But we built rules based on context. We look at: how is the comment written? Who is the comment written by? Who is it aimed at? Then we start building up a picture through the pre-processing stage. So it’s understanding: are there any emojis that have been used? Are there any sort of mistakes within the wording?

Once we’ve done all this pre-processing and cleaning, it goes through the rule-based process. Right now we have over one million custom-built rules, written by NLP specialists, and the rule set is updated 100 times per day. So again, it’s very important to understand that this is a constant battle. Language is constantly changing across the internet.
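
Bodyguard.ai’s pipeline is proprietary, but the shape Clements describes – normalise the raw comment, then run it through classification rules – can be sketched in a few lines. Everything in the illustration below (the character-substitution map, the example rules, the severity scale) is invented for the sake of the example rather than taken from Bodyguard.ai.

```python
# A toy pre-processing and rule-matching pipeline. Bodyguard.ai's real system
# is proprietary; every name, rule and mapping here is invented for illustration.
import re
import unicodedata
from dataclasses import dataclass

# Undo a few common character substitutions ("l33t speak") before matching.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"})

def preprocess(comment: str) -> str:
    """Normalise a raw comment before any rules are applied."""
    text = unicodedata.normalize("NFKC", comment).lower()
    text = text.translate(LEET_MAP)
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)  # collapse "noooob" -> "noob"
    return text.strip()

@dataclass
class Rule:
    classification: str   # e.g. "hate", "threat", "junk_noise"
    severity: int         # 1 (mild) .. 5 (remove immediately)
    pattern: re.Pattern

RULES = [
    Rule("junk_noise", 2, re.compile(r"\b(free coins|visit my channel|cheap accounts)\b")),
    Rule("threat", 5, re.compile(r"\bkill you in real life\b")),
]

def classify(comment: str) -> list[tuple[str, int]]:
    """Return every (classification, severity) pair triggered by the comment."""
    text = preprocess(comment)
    return [(r.classification, r.severity) for r in RULES if r.pattern.search(text)]

print(classify("I will k1ll you in real life"))  # [('threat', 5)]
```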

To stay on top of that, we need to be first at it. We need to understand the people that we’re working with. Every online community has its own way of talking to each other, which is the same within the gaming space. So we need to understand: how does that work in our technology?

It’s very important that we understand the positive aspect of a community... What we want to try to do is to promote positivity
James Clements

What’s really interesting as well is that once we onboard a new client, what we typically see is a detection rate of somewhere between 60% and 70%. But then, after only 30 days of using the platform and working with our specialists, that goes up to around a 90% detection rate. The technology learns over time.

Its grasp of language can really improve that quickly?

Arnaud Chemin: Absolutely. To add to this, our vision of moderation is built on three pillars. I would say the first one is “real-time”. Because if a comment appears, and somebody sees the comment, it’s too late. So we really want to work, in real-time, to be able to catch the comment before the harm is done.

This is very, very important for us, and this is where the magic of technology is happening. Because our tech is working so fast. It’s faster than the blink of an eye. You won’t see any harmful content when Bodyguard is protecting a social media page or in-game chat. This is the first pillar.

The second one, James already mentioned: it’s “context”. Context is everything, because it’s very different between media, sports, gaming… It’s not the same community, it’s not the same audience. It’s a different age, a different gender, different origins and everything. So you need to understand this context to determine who the content is aimed at and how it is toxic. What is the degree of seriousness, of severity? This is key, and a huge part of our technology.

And the last point we already touched on is “customisation”. Moderation, even between two gaming companies, will be different. What we are doing with our clients is listening to them. We are working with them to understand what their community guidelines are and how we can customise our tech to their specific needs.

James mentioned the rules that we are working on. We have general rules for everybody, for our clients. And generally, hate speech is “no”. Racism – of course, it’s bad for every industry. But then we also have rules for specific industries: rules for gaming, rules for sport. It is not one-size-fits-all.

I’ll give you a very simple example. In gaming, when somebody in a game chat is saying, “I’m going to kill you”, it’s probably not hateful! It depends, again, on the game, but if it’s an FPS, it’s the point of the game! So we need to teach our tech not to delete this kind of comment in the game. But if someone is saying, “I’m going to kill you in real life,” the tech needs to understand that even in gaming, it’s still a very serious threat.

So the final layer of the tech is custom rules for our clients, because maybe there are specific types of words. You know, in gaming, the names of monsters or characters are everything. And we need to adapt the tech so that it really understands this, and so that we can properly classify all the comments for our clients.

So these are the three pillars: real-time, contextual, and customisation.
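
To make the “context” pillar concrete, here is a deliberately simplified sketch of how the same phrase could be treated differently depending on where it appears. The channel and genre fields, and the decisions themselves, are assumptions made for illustration, not Bodyguard.ai’s actual schema or rules.

```python
# Context-dependent handling of the "I'm going to kill you" example above.
# The fields and outcomes are invented for this sketch.
def moderate(comment: str, context: dict) -> str:
    text = comment.lower()
    if "kill you" in text:
        # An explicit real-world threat is removed and escalated, whatever the context.
        if "in real life" in text:
            return "remove_and_escalate"
        # In an FPS in-game chat, "I'm going to kill you" is usually just the game talking.
        if context.get("channel") == "in_game_chat" and context.get("genre") == "fps":
            return "allow"
        # Anywhere else (say, a brand's comments section), treat it as toxic.
        return "remove"
    return "allow"

print(moderate("I'm going to kill you next round", {"channel": "in_game_chat", "genre": "fps"}))  # allow
print(moderate("I'm going to kill you in real life", {"channel": "in_game_chat", "genre": "fps"}))  # remove_and_escalate
```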

Most social media communities won’t see the comment. It will be deleted or hidden from them in real-time, before harm is done
Arnaud Chemin

What’s the experience for a company starting to use this? How hard is onboarding a new game client, for instance?

James Clements: You want to think of this as an iceberg. You see the tip of the iceberg out of the water. This is the kind of platform you would see as a client, which is just the dashboard with all the analytics. The real tech is the rest of the iceberg that’s underneath the water! You don’t see it. There’s so much behind the tech. That’s what our team does. We work on dialling the tech into your community, understanding your community, and really getting a feel for how the tech could best work within that community as well.

It’s really important that we understand each of the verticals that we’re working in. For instance, if we look at brands – brands have very different needs of Bodyguard than gaming does. This is why it’s important that we understand that and use the classifications in the right way, so that people can really dial the solution into what they want to see from it and get the best possible effect from it as well.

Arnaud Chemin: You have two types of experience when you’re using Bodyguard. You have the client experience, and you have the end-user experience.

For the clients – first, as James said, we have the dashboard; we have analytics; we have everything. But it’s super-easy to connect with Bodyguard. On social media, you just have to know your log-in and password for the account you want to protect. And that’s it. In five minutes, you are connected and protected by Bodyguard.

If you want to plug it into your in-game chat, we have our own API. Again, with the right developer, you can do this in just a few days. It’s something that is fairly easy to implement, because it’s not something you need to install within your game – it sits outside of your game, using the API. So it has no impact on the performance of your game. That’s really, really important, because if somebody sends a comment and it takes ten seconds to be analysed and then appear in the chat, that’s just not workable. So this is the client’s experience.
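
As a rough illustration of the kind of integration Chemin describes, a game’s chat server might call an external moderation service asynchronously, so that gameplay never waits on the analysis. The endpoint URL, payload fields and response flag below are placeholders, not Bodyguard.ai’s documented API.

```python
# Sketch of wiring an external moderation API into an in-game chat server
# without blocking gameplay. URL, payload and "toxic" flag are placeholders.
import asyncio
import aiohttp

MODERATION_URL = "https://moderation.example.com/v1/analyze"  # placeholder endpoint

async def is_allowed(session: aiohttp.ClientSession, player_id: str, text: str) -> bool:
    """Ask the moderation service whether a chat message can be shown."""
    payload = {"author": player_id, "content": text, "channel": "in_game_chat"}
    timeout = aiohttp.ClientTimeout(total=0.2)  # keep chat latency imperceptible
    async with session.post(MODERATION_URL, json=payload, timeout=timeout) as resp:
        verdict = await resp.json()
        return not verdict.get("toxic", False)

async def on_chat_message(session, player_id: str, text: str, broadcast) -> None:
    try:
        if await is_allowed(session, player_id, text):
            broadcast(player_id, text)
    except asyncio.TimeoutError:
        # Fail open (or queue for later review) rather than stalling the chat.
        broadcast(player_id, text)
```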

And then the clients have to decide about the end-user experience. This can be very different from one client to another. For social media, it usually depends on the rules of the platform. On Instagram, for example, if we classify a comment as hateful, the comment will be removed from the platform, because as the owner of the business page you have the right to moderate your comments section.

With Twitter, it’s a bit different, because each tweet is owned by the author of the tweet. As a company, you cannot delete a tweet. So what we’re doing is a bit different. We’re trying to hide the tweet, to report the tweet, and see where it goes from there. We need to abide by the rules of social media platforms.

The end-user will be protected from this. Most social media communities won’t see the comment. It will be deleted or hidden from them in real-time, before harm is done.

With videogames, using the API, it can be a bit different, because the publisher can do whatever they want. They could, of course, delete the comment, but they could also replace the comment with something else – something more educational, maybe, like saying, “It’s not nice to say that.” Maybe you try to rephrase it or something. So the possibilities are really limitless. It really depends on what you want to achieve from a community perspective. It’s also very important not only to focus on the negative part, but also the positive part.
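
The platform-by-platform behaviour Chemin outlines can be pictured as a simple dispatch. The function names below are stubs invented for illustration, standing in for calls to each network’s own API.

```python
# Illustrative only: the per-platform actions described above, with stubbed-out
# platform calls. Real integrations go through each network's own API.
def delete_comment(cid: str) -> None:
    print(f"deleted comment {cid}")

def hide_and_report_tweet(cid: str) -> None:
    print(f"hid and reported tweet {cid}")

def replace_message(cid: str, text: str) -> None:
    print(f"replaced message {cid} with: {text!r}")

def apply_action(platform: str, comment_id: str) -> None:
    if platform == "instagram":
        delete_comment(comment_id)         # page owners may remove comments on their own page
    elif platform == "twitter":
        hide_and_report_tweet(comment_id)  # a tweet belongs to its author, so hide and report instead
    elif platform == "in_game":
        # Publishers control their chat entirely: delete, or swap in an educational nudge.
        replace_message(comment_id, "It's not nice to say that.")

apply_action("in_game", "msg-42")
```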

You analysed a lot of positive comments in order to train your tech. Why is it so important that you understand how positive conversation goes?

James Clements: It’s very important that we understand the positive aspect of a community, because in moderation, people tend to class moderation as negative. It is a negative task to be deleting hate. But what we want to try to do is to promote positivity. To do that, we need to give a visualisation of how positive your community is towards different aspects.

If we take a social media post, we can see how it was engaged with. What was the level of positivity over negativity? And then that gives the team an ability to look into that further, and find that out.

As well as that, though, you can start to engage with your community – you know, engage with the most positive people in your community. And we’ve found that that engagement then drives further positivity.

So you have the ability to look at it from the angle of: “OK, we’ll build our community guidelines based upon this. Whoever creates x amount of toxicity, we can determine that they’re not valuable within our community. Let’s remove them.” The same goes for games as well. If someone’s broken the guidelines three or four times, depending on the game, then is that a person who needs to be banned from the game for a certain amount of time? We’re acting as almost a middleman to give people the full picture, let’s say.
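
The strike-counting idea Clements sketches is straightforward to express in code. The threshold and ban length below are arbitrary examples, not anything Bodyguard.ai prescribes – as he says, the final decision stays with the publisher or community manager.

```python
# A minimal sketch of the "three or four strikes" idea. Thresholds and ban
# length are invented for illustration.
from collections import Counter

strikes: Counter = Counter()

def record_violation(player_id: str, max_strikes: int = 3, ban_hours: int = 24) -> str | None:
    """Count a guideline breach; return a suggested action once the threshold is hit."""
    strikes[player_id] += 1
    if strikes[player_id] >= max_strikes:
        strikes[player_id] = 0  # reset once a ban is suggested
        # The final decision stays with the publisher or community manager.
        return f"suggest temporary ban for {player_id} ({ban_hours}h)"
    return None

for _ in range(3):
    action = record_violation("player-7")
print(action)  # suggest temporary ban for player-7 (24h)
```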

Arnaud Chemin: We want to change the narrative around moderation. We want moderation to become a positive part of community management. Why? Because companies tend to focus on the negative. They try to answer negative comments. They try to ban bad actors. But they tend to forget about the good ones. We think that community management is all about positive interaction with your community. It’s about building something together and sharing meaningful content with your community. And you need time for this.

This is actually the core value of Bodyguard. We offer time to the community manager. We take care of the toxicity and the bad actors, and they can focus on what’s really meaningful in the job. We think the next level for moderation is to move into what we call behavioural moderation – trying to understand how your community is evolving. What’s your community talking about? What is the reception and sentiment around launching a new game, launching new characters or whatever?

It’s a question of mental health for community managers and moderators. It can be very hard to spend all day reading heavy stuff online
James Clements

Moderation is all about that. It’s not just all the toxic stuff. It’s a question of mental health for community managers and moderators. Because it can be very, very hard to spend all your day reading heavy stuff online, so this is also what we’re trying to offer them.

How good is the tech, at the moment, in interpreting what it sees? You spoke a little about how it sees the context and how it improves over time. But how reliable is it as a technology community managers can currently depend on?

Arnaud Chemin: It’s a very, very interesting question, because using an AI to do moderation is all about trusting the AI. So the first thing about our approach is that our tech is very transparent. We share with our clients, at every moment, why a decision has been made: the classification, the severity, and everything. So you can understand the tech.

Then to answer directly, “how good is the tech?” – as James mentioned earlier, we detect 90% of the toxic comments online. It means that, of course, you have the 10% that remains. Today, not a single technology is able to detect 100% of the comments. Maybe no human is able to do that!

But we really commit to detecting 100% of the most serious toxic comments. We really want our clients to be sure that all the really bad stuff will be deleted in real-time by the technology.

In football, community managers are so happy about Bodyguard. One of them told us, “Now I can sleep at night during games, or during the weekends, because I know that the tech is going to do all the very important work.”

You still have to human-review the remaining 10% of the content. But this is fine, because it also means you can do it when you are ready. For example: “OK, this is Monday morning. I know I need to go through a bit of moderation. I’m ready for it. Let’s do it now.” But during the weekend, you’re good, because you know the tech is going to do the job for you.
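
The split Chemin describes amounts to routing: clear-cut toxic content is removed automatically in real time, while lower-confidence items land in a queue a human can clear at their own pace. The severity scale and confidence threshold in this sketch are invented for illustration.

```python
# Routing sketch: auto-remove the unambiguous cases, queue the uncertain ones
# for human review. Thresholds are invented for illustration.
import queue

review_queue: queue.Queue = queue.Queue()

def route(comment: dict) -> str:
    if comment["toxic"] and comment["severity"] >= 4:
        return "auto_remove"             # no shades of grey: handled in real time
    if comment["toxic"] and comment["confidence"] < 0.9:
        review_queue.put(comment)        # a human can work through this on Monday morning
        return "queued_for_review"
    return "auto_remove" if comment["toxic"] else "allow"

print(route({"toxic": True, "severity": 5, "confidence": 0.99}))  # auto_remove
print(route({"toxic": True, "severity": 2, "confidence": 0.60}))  # queued_for_review
```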

So I think that, generally, AI is doing better and better. But as we said earlier, it’s the combination of AI and human that makes it so reliable. AI in itself doesn’t really learn by itself. It needs to be trained. It needs to be updated regularly.

And this is why, also, we’re not using machine learning like a lot of our competitors. We do think machine learning is great for a lot of stuff, but moderation and understanding context is not one of them. Also, it’s a bit slower to update. You need to train the algorithm to update the data sets and everything. Using our technology, as James said, which is rule-based, we can update the technology on a daily basis. And this is what we are actually doing.

The idea is to keep up with the new trends. If your tech needs four days to be updated, you’re four days late. So we really need to keep up with the new trends and keep up with the understanding of emojis and variations in typing. The gaming community, in particular, they’re very creative when trying to bypass such tools. So we need to be there and to be able to update the tech very, very quickly.

Does that mean you have an army of humans who are updating rules as you go? Is there a big team somewhere working on this constantly?

Arnaud Chemin: Not an army, because as we said, the idea of Bodyguard is taking the best of both worlds. So it’s tech and human. So what we have are NLP engineers. They are trained linguists that are specialised in sociolinguistics. So it means that their specialisation is not to understand the language as it should be used – you know, grammatically correct. It’s to understand the language as it is used online, and in games, today.

So there are specialists in this area. And we have one specialist per language at Bodyguard. So we are covering, right now, six languages. It’s a team of six people based in Nice, in France, at our headquarters. And because they can focus on updating the rules, and the tech is so efficient, six people are enough to do this job. So we don’t have an army of moderators.

It’s not a bad thing, but that’s the past – in our mind, the past of moderation was when you’d use an army of moderators, usually abroad, to do the job manually. We want to move away from this. We’re still using humans, because humans are better than machines at understanding context and nuance. But with AI, we’ll be able to scale and also respect the work of the people working for us.

James Clements: As well as that, what’s really interesting is, the more clients that we onboard that come into Bodyguard, the better the tech becomes. We understand more languages. We understand the way someone’s speaking about that particular product. This all filters globally across our system. And that also filters into growing the product, so we can open up new markets.

So for instance, we’ve just hired a new tech specialist for the US, a market we’re going to be entering very soon. US English is very different from British English. It’s really about helping the AI interpret what it is seeing. But to do that, we constantly need to have NLP specialists working to really allow the tech to do all of the heavy lifting that is required.

Moderation is not a global thing. It’s a very local thing, and you need to understand the rules, the laws, and the culture of it
Arnaud Chemin

Arnaud Chemin: It’s not only about the language differences between US English and UK English – it’s also about the cultural aspect of it. I think this is one of the challenges that we have for moderation, and one of the biggest mistakes the whole industry was making in past years was treating moderation as a global thing. It’s a very local thing, and you need to understand the rules, the laws, and the culture of it.

Take freedom of speech, for instance. This is very different in the US than in the UK, than in France, than in Asian countries and everywhere. We really need to go deep into this understanding and, as James said, have these experts who are able to understand what’s going on in their country or area, because this is where all the mistakes could happen. Maybe in the US they’re more open to firearms, for example – it’s a bit of a cliché – but on the other hand they don’t want to see anything related to sexual content. They’re stricter on that one. We need to understand that, and we need to have tech that is flexible and customisable enough for us to be able to make this kind of adjustment.

Can you give us a sense of what’s happening in the world of natural language processing and artificial intelligence at the moment?

James Clements: Definitely. You know, Bodyguard is a unique tool. We do have competitors who offer machine-learning solutions or human moderation solutions, but the way we use NLP and build rules on top of it makes us a unique proposition for clients.

The biggest sort of challenge that we face – it’s a growing list, I will be honest, but an exciting one because it means we’re constantly evolving the product – is understanding things like irony and trolling. That has come on exceptionally over the last few years. People are a lot more clever in how they troll, and as new generations grow up, it’s harder to understand that. So we have to keep up with that as well.

We are updating the product on a daily basis with NLP, but also the product side of things. Arnaud mentioned we’re introducing things such as new classifications inside the product, and potential customisation there as well, which means, again, there’s a lot more granularity with how someone can use the platform.

The heavy lifting, really, is always there, so we are able to moderate more than 60 million comments per month. And that’s just the number that we’ve got at the minute! There’s no let-up in that, either. But it is a very complex tool for us to build, and to use NLP as we do. And translation is a big part of that.

But one thing that makes Bodyguard unique compared to other solutions is that it is a completely proprietary technology. What that means is we can go in whatever direction our clients wish. So if our clients decide, “OK, we want THIS inside of our solution,” then we can put that into our road map, and decide on the best sort of course of action for that.

So coming very soon, we’re going to be opening up to, I think, 40 new languages. This means that there’s a lot of work to be done to understand the dialects of different languages and how that translates back into English so that we can use our NLP rules. The challenges are ever-growing. But it’s one that we’re really excited about.

Arnaud Chemin: What’s really important for people to understand is that natural language processing is a very new thing. We’re creating a new market, but also a new field of expertise – a new kind of job. NLP sits at the crossroads of linguistics and technology.

It means that there are very few people on the market that are able to do this. And one of the biggest challenges for us as a company is to find these people, to train them, and for them to be efficient in their job. And I think that’s what is very exciting for the years to come. It’s that AI has made tremendous progress over the past five years.

We recently saw news of very impressive stuff, working with AI, with machine learning, with everything. The question is more how we are able, as a company and as an industry, to leverage these improvements and make them reliable for our clients. Because a lot of things in this area are trying to impress the general audience with: “You can ask this question to this machine, and it’s going to have a great answer.” And this is true. But the amount of work behind it is huge.

Internally, when we’re talking with our developers and everything, we’re trying to ban the word “magic”, because it’s not magic. It’s really a lot of work across different areas of expertise – technology, linguistics, and everything – for our tech to perform at such a high level. It’s not magic. It’s huge work, with a lot of challenges.

But we’re really excited to be working in this area, because we’re trying to leverage this technology to make the life of communities safer and better. This is also what drives us on a daily basis. We’re trying to achieve something very positive, not only from a technology perspective but from a social perspective.

James Clements: Also, just to round that off, I think this is why it’s really good to have conversations like this, because one challenge that we have is education around moderation. People need to know that something like this exists. Larger communities and companies probably have a team in place; they understand moderation, but maybe they don’t know this exists. Smaller companies maybe don’t understand it at all; they’ve just got someone assigned to it who’s not a moderator and doesn’t really understand it. So it’s good to have this conversation and let people know what is possible.

PC Games Insider first met with the Bodyguard.ai team during Paris Games Week, specifically Game Connection Europe, in November last year.

We’re grateful to James and Arnaud for giving up their time to explain the software to us, as well as their company’s vision and values. You can find out more on Bodyguard.ai’s website. The role of machines in communication is something we’re covering a lot at our events – the next Connects conference is in Seattle and will have a dedicated AI Advances stage.

COO, Steel Media Ltd

Dave is a writer, editor and manager who today is Steel Media's Chief Operations Officer. He gets involved in all areas of the business, from front-page editorial to behind-the-scenes event strategy. He began his career in games and entertainment journalism back in the 1990s when Doom came on floppy disks. You can contact him with any general queries about Pocket Gamer, Beyond Games or Steel Media's other websites, conferences and initiatives.