Corporal punishment

I’ve been reading about corporal punishment (eg spanking) lately. Not explicitly going out and researching it; just some of the people I follow happen to be talking about it. So I finally started thinking about the issue.

I was raised with corporal punishment. What I remember about it is that I always felt the full impact of “I have done something wrong and disappointed my parents” before they spanked me. The spanking was a separate bout of shame and humiliation. It angered me. But I couldn’t express that anger, because I had done something wrong. Even if my parents had been willing to listen to me at the best of times (and on this issue they weren’t), they wouldn’t have listened in that moment.

I understand the idea of using pain as a disincentive — you use it for creatures that won’t understand any other way. Like dogs — no, dogs understand human emotions. Or horses — no, if you hit a horse, you get a horse that’s scared of people rather than one who acts as you wish. Like a five-year-old human — no, humans at the age of five can read your emotions better than you think, and disappointment is one that cuts especially deep.

Maybe pain works with especially young children? But consider how we treat puppies: we don’t expect them to make proper decisions on their own. We instead restrict them to situations where their poor judgment will not hurt them and is less likely to inconvenience us.

Somehow, we expect both more and less from children than we do from dogs. We expect better decision making, and at the same time we expect worse emotional understanding. And these foolish expectations result in us treating children worse than we would treat animals.

Nim versus Go: syntactic style and friendliness

A while ago I made a post on Go’s programmer-friendliness. Lately I’ve been using Nim, a language that’s often mentioned in the same contexts as Go. The languages have relatively different aims, albeit with a decent overlap.

Nim’s major strength, I think, is C/C++ interop. There’s a tool that generates Nim declarations from a C header file, and it’s pretty trivial to link against C++ libraries. This is unusual — there are about two languages that let you interact with C++ types without wrapping each call or explicitly serializing data. On the other hand, it doesn’t seem quite as easy to use Nim from C++.

Beyond that, Nim has a relatively straightforward type system with some interesting quirks, and it has a lot of metaprogramming options. You have generics, macros, and templates all available. You can call procedures (aka functions) however you want syntactically; len(foo) is the same as len foo, foo.len(), and foo.len. This is convenient — it’s trivial to add methods to an object.

Between the syntactic and metaprogramming flexibility, Nim avoids many of the problems that Go has. It’s easy to do functional programming, for instance. Errors are propagated via exceptions. The libraries seem mostly sane.

There are some similarities between the two languages. They’re both from the most recent wave of systems-oriented languages, which introduce garbage collection along with facilities to avoid using it. (D is one of the forerunners. I would call D a third-wave systems language, with C and C++ being the second wave and assembly being the first.) But Go is mostly a C-inspired language, whereas Nim is inspired mostly by object-oriented languages. Go has strict pseudo-concurrency emphasizing safety in the face of shared state, while Nim offers real concurrency with thread-local memory and an optional shared-memory heap.

Nim is a bit less opinionated and offers better interoperation with existing code, which makes it a more practical choice in some ways. However, Go is simpler and has better compilation speeds. (Nim has to generate C or C++ code and then compile that, so its compilation times are relatively bad.) On the whole, especially since my projects tend to be small, I prefer Nim by a fair margin. I have to think more, but that’s less a matter of the language being useless and more a matter of the compiler not reading my mind.

The State of 3D Linux C# Game Engines

I like using high-level languages with static typing and garbage collection. That leaves a handful of options — C#, D, Go, Java, and maybe a couple others. Of these, C# is my favorite. And again, for a number of reasons, Linux is my standard operating system. And I want to create games for my preferred OS with my preferred programming language.

Current options

OpenTK

This isn’t actually a game engine. It’s a .NET binding layer over OpenGL, OpenAL, and (for some of the input handling) SDL, so it handles input and sound and a few other things. A lot of other projects use it, so it’s worthwhile to talk about its foibles.

Right now, the latest release has a cute bug: if you run under a debugger with an Xbox 360 controller attached, it crashes. Something about SDL_JoystickGetGUID. The workaround: comment out the calls to it and recompile.

Monogame

Monogame runs on Linux. It uses OpenTK. It probably does whatever you want except for mouse input. Actually mouse input works well, assuming you’re only supporting full screen mode on single-monitor systems. Otherwise, well, your mouse coordinates are based on the position across the virtual display rather than the window. (It’s a bug that seems to be affecting only me.) And no, there’s no cross-platform way of getting the window position.

So yes, as long as you don’t need to access the mouse position, it’s maybe okay. I moved on once I saw this issue.

Wave Engine

This is actually a promising option, once they get any documentation whatsoever. But it’s closed source, there are only a half dozen sample applications, and half the documentation is wrong. (The API documentation isn’t wrong per se, but it’s about the same as whatever MonoDevelop would autogenerate for you.) So wait a year and see if the documentation improves.

Aside from that, it requires you to know the game resolution and viewport before you start your game. This is highly annoying. Monogame doesn’t do that. Monogame lets me resize my window whenever I want and updates its viewport accordingly.

Axiom3D

Axiom3D does not support any working input system on Linux. You can choose between SharpInputSystem, which is a dead project that never passed alpha and never worked on Linux, and OpenTK. OpenTK would work, but somehow Axiom3D messed up the OpenTK initialization or window creation, and you get bogus values from it. It might again be fine if you’re always doing full screen, but in windowed mode, you get the origin point changing whenever you resize the window.

It also has terrible documentation.

Delta Engine

Delta Engine is open source and has some level of documentation. (Yay!) It’s available via NuGet, but don’t let that fool you — the NuGet binaries don’t work on Linux. But it’s open source, and you can build it on Linux with a few tweaks. Annoyingly, you can’t just cd into the directory and run xbuild, but you can open it in MonoDevelop, fetch the NuGet packages, add Autofac to the two or three projects they forgot, add a cast or two, comment out one or two lines of code, and get everything to build with only about ten minutes of effort.

So, yay!

Unfortunately, it looks like the Linux support is a dream at the moment. They hope to add it, but (for instance) their input solutions are full of abstract classes with only Windows implementations and not even a start at a Linux implementation. There is a MonoGame edition, though, and an OpenTK edition. Similarly to the main distribution, the OpenTK release has numerous build errors, so it’s a matter of ten or fifteen minutes to compile it.

Also note that the OpenTK release depends on OpenTK, which has potential issues with gamepads. Hopefully you’ve already patched and built your own copy. Handily (and I say that with the utmost sarcasm), they didn’t see fit to provide their own binaries, so instead of replacing some files in a Lib directory, you’re going to be adding the references by hand wherever you find build errors.

Once you’ve gone through this fifteen minutes of work, you find that the result won’t even run its samples. It’s convinced that it needs to copy OpenAL libraries to its build directory. Once you clear that out, you quickly find that they didn’t test their own OpenTK support at all, and they’re trying to use OpenGL on a window that wasn’t created with OpenGL enabled.

So yeah, if you want to use this engine on Linux, get ready for some intensive debugging. Of their code, not yours. I don’t have the patience to get it working.

Unity3d

You must be a WINE god.

Unreal 4

Unreal Editor apparently runs on Linux, and thanks to Xamarin, you can use C# for scripting. But it’s a far cry from implementing a game in C#, and scripting is an area where I’d rather use Boo or something else that’s designed to be interpreted.

So, the overall state is pretty terrible. Good luck!

Far Cry 4’s women

I’ve been watching Hannah from the Yogscast playing through Far Cry 4. I quite enjoy watching Hannah’s channel aside from how frequently she uses the word “bitch”.

Far Cry 4 has three named female characters that we’ve seen: Amita, Noore, and Bhadra. Bhadra’s a normal teenager, or at least she’s trying to be. But Amita and Noore are in much different situations, and they’re treated inappropriately. Additionally, the fact that every unnamed Golden Path NPC is male is unrealistic — guerrilla organizations tend to be roughly one-third female.

Amita is one of the two leaders of the Golden Path. She is positioned as a ruthless hardass, optimizing for victory over individual lives. Sabal, her co-leader, is more concerned with individual survival and the number of lives saved. Yet Amita is frequently emotional, on the verge of tears, while Sabal is only a bit moody — the opposite of what their roles suggest. Moreover, Amita’s a warrior, but she has nice clothing, a hairstyle that takes effort to maintain, and an astounding lack of scars. I’m not saying leaders can’t cry; rather, it’s hard to gain respect from the men you lead if you’re seen crying often.

Bhadra, in contrast, looks like she spends less time on her appearance — but we’re fine with that. She’s a teenager trying to live like imported media tells her she should live; she’s not fighting for her life and the lives of others.

Noore is another story. She’s a doctor rather than a fighter, and her family’s being held hostage. It’s perfectly reasonable for her to seem less of a hardass. But. When we’re introduced to her, she’s overseeing a death arena. She must be relatively inured to death by now. Later, we see her stabbing a man to death and telling her underlings to throw the body to the animals. So we’ve good reason to think she’s a hardass, too. But a moment later, she’s back to tears and weakness and being unable to do things on her own. At least her public persona gives her a reason to be dressed up with plucked eyebrows.

On the whole, I’m not impressed. In almost every particular, I’m not impressed. It’s nice that there’s a woman in a position of command, but she’s not at all active in her position, which makes the title rather worthless.

Urho3D networking

Urho3D is a game engine. I’ve just started looking into it, and I know very little about it. So I’m blogging through it to help me understand and to help others understand sooner.

Networking!

I want to create a multiplayer game using Urho3D. I know how to send packets around, but Urho3D has a network-based event system. I’m not seeing docs on it that tell me how to use it. So we’ll take the example application, NinjaSnowWar. Popping open NinjaSnowWar.as, we immediately see a promising function named InitNetworking(), which starts off:

network.updateFps = 25; // 1/4 of physics FPS
// Remote events sent between client & server must be explicitly registered or else they are not allowed to be received
network.RegisterRemoteEvent("PlayerSpawned");
network.RegisterRemoteEvent("UpdateScore");
network.RegisterRemoteEvent("UpdateHiscores");
network.RegisterRemoteEvent("ParticleEffect");

network is a global value of type Network; check the docs for details. updateFps is an interesting thing; we’ll have to look into it later. The rest registers event topics that the application will use.

After that, we see different behavior for client and server. In each case, the application subscribes to certain events (SubscribeToEvent global function). The server calls network.StartServer and the client calls network.Connect. Straightforward enough.

What we don’t see is any sort of player position updates. But we do see an interesting property: network.serverConnection.controls. Looking at the Controls type, we see it holds a collection of button presses, pitch, yaw, and a VariantMap called extraData.

Now, remember that updateFps property? In the client, you set properties on network.serverConnection, and every so often, the engine sends that data to the server. So the updateFps property is a measure of granularity of input — if you tap a key for 0.02 seconds and updateFps is set to 20, the server will see that you held the key down for 0.05 seconds instead.

But we’re not seeing the other side of things — we don’t see any place where the server sends each client the position of each ninja. Why is that? The server periodically updates each client with the current state of the entire scene. As a default, it’s a huge win — I don’t have to write a line of networking code for most of my synchronization! But I need a way to opt out when that default doesn’t fit.

Consider trying to implement Unreal Tournament with this networking system. It works. But two weeks after you release the game, someone’s got a hacked client that tells you when guns spawn and where every enemy is. You, meanwhile, are sobbing into your hands.

I don’t know how to make this sort of thing work at the moment. Presumably you could write your own networking layer from scratch, but that would be troublesome.

Events?

Your entire scene is managed for you. Why do you need events? Well, one thing it helps with is GUI elements. The GUI isn’t managed through the scene, so you have to do something special for it. It also handles requests from the client that aren’t easily divined from which buttons the player is pressing. For instance, if a player is attempting to buy an item, she clicks on the “Buy” button, and the client sends a PlayerBuyRequest event to the server, instead of sending mouse coordinates over and listing which mouse buttons have been clicked.

So. Clients send GUI events to the server. In response, the server sends events that affect things besides the scene. I’m not sure if this includes sounds, though.

Verdict

Scene-level synchronization is thorough and effective, but I worry about its efficiency. A homebrew solution that synchronizes only the variables important to the simulation would be more efficient, but the engine can’t do that for you.

The naive implementation suggests one server per map. If I want to support ten simultaneous matches for my game, I need to spin up ten instances of the server. This model is relatively simple to implement, which is nice.

Depending on efficiency, this is a fast, simple, probably good enough solution. Expect it to fall over and die if you have a very dynamic environment (walls crumbling and tons of physics-based explosions) or an open world. It’s also prone to cheating. But it will get you started, at least.

Why I stopped playing Borderlands 2

Some games try to position themselves as skill-based. Others are power fantasies. Some just want to take you through stories.

In point of fact, many games do more than one of these. Spectacle fighting games tend to let you do very powerful moves if you have the skill, for instance. And most games have a semblance of a story.

Borderlands 2 seems to try to be skill-based. To stay alive, you need to kill enemies that are not terribly easy to kill and that can do a fair bit of damage. However, dying costs you little: you can recover for free if you kill an enemy within a few seconds of reaching 0 HP, and you are quickly equipped with a shield that recharges quickly out of combat. And there typically isn’t good enough cover for me to shoot and duck — or if there is, the enemies will wait where they are, somewhere you can’t shoot them without abandoning any semblance of cover.

So there are two approaches I’ve found to the game. I can snipe at maximum range, where the enemies can hit me but not very often, and run away to recharge my shields. Or I can charge in and die a lot. If I snipe, I have to purchase ammunition rather often, because the enemies all take a large number of shots to kill. If I charge in, the game punishes me by respawning me some distance from the fight, reducing my cash by a negligible amount (while refilling my ammunition), and making me watch a cutscene that I can’t skip.

That’s right: the main cost isn’t an in-game resource; it’s my own annoyance. And getting me annoyed at your game is not a selling point.

The annoyance continues beyond that. You can’t skip the intro cutscene. The first character you interact with, Claptrap, is designed to be annoying — which is amusing the first time through and infuriating by the third. You’re supposed to escort this character everywhere, and that means going slowly because it gets lost when you move quickly. You might travel ahead of it to clear territory, but once it arrives at an area you’ve already cleared, new enemies spawn out of nowhere. Your weapons feel powerless; it takes too many shots to kill anything, and their accuracy is too low to get reliable headshots at any range or even hit things at long range.

The annoyance stems from the game avoiding the power-fantasy style while failing entirely at requiring skill. Instead of making anything tactically interesting, they seem to have simply added health to the enemies until their playtesters started dying enough. I can deal with a game that demands tactical skill. For this type of game, I’d rather have a power fantasy. But Borderlands 2 just tries to annoy me.

Go as a programmer-friendly language

At my job, I’ve been using a lot of Go lately, and it’s a bit of a mixed bag. Here are my thoughts so far.

Type system

Most languages have either a rich type system or a weak type system. Java attempts to have a rich type system: it’s on the strict side, and it gives you generics and type constraints to deal with it. It also lets you cast things between unrelated interfaces if you think you know better than the compiler. Haskell has a much more advanced type system where you don’t need to specify types on pretty much anything as long as you use types consistently. C++ gives you templates, and template instantiation conveniently uses duck typing.

On the other end of the spectrum, we have dynamic languages that don’t worry about types at all. Lua, Python, Ruby — they have no facility for specifying what type a value must be. And in the middle, Dart has optional typing — you can specify all the type information you want or none at all.

Go has none of the type system advancements of C++ or Java. It has no facility for writing code without knowing in advance what types you are working with, and there’s no way to escape its strict type system.

Today I was trying to write a series of protocol buffers to a set of files. Each file held a different type of protobuf, and I had a list of protobufs that had to go in each file. In Java, that would have been easy — I’d have a method that takes a List&lt;? extends Message&gt; and writes each element to a file. How do you do this in Go? Well, the most straightforward way is to copy your slice of *foo.MyProtoMessage into a slice of proto.Message and — dear god, I just used O(N) extra memory and time to work around a type system, what am I doing with my life? And you investigate a method-to-method-object refactoring just so you can pull a bloody for loop out of a method that can’t handle two types it should be able to treat identically, and finally, just before you cry yourself to sleep, you remember that you can write a custom iterator and pass in an iterator factory and a length.

2 January 2006

Perhaps the world would have been a better place if we’d originally standardized on date formatting based on a prototypical date. Regardless, that’s not what happened forty years ago. We’ve been using strftime syntax for bloody ages, and diverging from it isn’t doing anyone any favors. Worse, recognizing the reference date depends on a non-standard, US-ordered time format; they’d have been better off using 2001-02-03T16:05:06-0700.

All warnings are errors

In Go, it is an error, not a warning, to have an unused variable or import. I wanted to comment out one line of code today as a quick test. In order to get that to compile, I had to remove two imports and comment out four other lines of code scattered around the function. Warnings are helpful; making them errors is hostile.

Errors as explicit return values

Go uses multiple return values for error propagation. C usually does something similar, and with C it’s not so bad — you pass a pointer into the function, and it puts its output in that pointer. If you encounter an error, you return the error value and you’re done.

In Go, you return a default value plus an error value, or the actual return value plus a nil error value. This gets annoying fast, especially since there doesn’t seem to be a convenient way to specify the default value for a type. The problem is that errors aren’t a first-class entity in Go, with proper support. You pretty much never want to return an error plus another value; you want to return an error xor a valid value. But that isn’t a general use case; it’s a use case that’s rather specific to errors. So the designers of Go chose instead to implement a general thing that makes the special case of errors awkward to use.

All in all, Go is an okay language, but it grates on me. It has some nice syntactic ideas, but overall it’s not fun to work with.

Character development in RPGs

By “character development”, I of course mean the process of making your character more powerful or developed. In early Mario games, this doesn’t happen at all. In Deus Ex, you found weapon mods, improved skills, and installed nano-augmentations in your own body as the story progressed. In Final Fantasy IX, you got four coral rings and went to the cliff outside Gizmaluke’s Grotto and killed dragons for a while. In Diablo III, you hurried the fuck up to get to level 70, then ground on forever to get high level equipment drops.

In one of my favorite games, NoX, progression is controlled by limiting the supply of enemies, money, and equipment. You can be extra thorough and reach maximum level several maps early. You can use as few resources as possible for most of the game in order to purchase the best sword in the game at the last vendor — or you can spend every gold piece as soon as you get it, always trying to stay ahead of the game. Except the game doesn’t have enough useful purchasable items, so if you want to be terribly resource-constrained near the end of the game, you’ve probably bought pretty badass equipment recently or just avoided any semblance of exploration.

This is a relatively controlled type of leveling, and it tends to leave only one option for difficulty: better players will complete the game with ease while worse ones struggle. Rewarding exploration with items and experience can help make up for low skill levels. Like grinding, exploring becomes the optimal strategy. Unlike grinding, exploration produces finite benefits, which means you have to put more effort into tuning the difficulty and will still end up excluding some players.

I remember in Final Fantasy IX, my party went from level 25 to level 65 in about half an hour, at which point random encounters became an annoyance — my party could crush everything it found. I would have been happy with them if they were rarer, and even happier if the battle intro animation made it clear that I was so much stronger. When I go into boss battles, the bosses get their own groovy intro sequences; when the odds are even more in my favor, why don’t I? And destroying your enemies with extreme overkill is fun sometimes. But when it gets to be every encounter, and you only get to travel fifteen or twenty seconds on the world map between random encounters, something’s wrong. It’s rather realistic, but it throws annoyances at you constantly.

Diablo III takes a much different approach to grinding. Final Fantasy allows it; Diablo III demands it. You replay levels and gain more experience, but the game doesn’t become easier simply because you’re level 60 rather than level 10. Rather, you have access to a wider array of abilities, while the enemies that gave you trouble before are just as tough as they were: they kill you in the same number of hits and take the same number of shots to die.

You defeated the Prime Evil! You killed the seven greatest demons and Diablo himself! You did what the assembled hosts of heaven couldn’t! And then you go back to New Tristram and nearly get killed by a pack of shambling corpses. It’s grinding without the satisfaction. The only thing worth grinding for is legendary items that give you special buffs, and even that is boring.

And conversely, if you managed to get to Diablo at level 1, you could take him out, because the enemies scale to match you. Apparently the correct thing to do when faced with the most fearsome foe of all time is to identify roving adventurers with murky pasts, sit them in a cellar, and keep them there while your assembled armies take care of the problem; otherwise your problems will just grow alongside your heroes. It’s absurd and unsatisfying.

That’s two bad examples of grinding and one example of not grinding. I’ll speculate about how to implement grinding properly next post.

Zelda, sexism, and narrative explanation

Zelda, the princess of Hyrule and the wielder of the Triforce of Wisdom, is notable for several things. One of the foremost is how often she is kidnapped. Some people argue that the frequency of her kidnapping is not indicative of sexism: she rules a country and holds a powerful magic object, both of which are reasons for people to kidnap her.

This sounds reasonable on the surface — she’s kidnapped because of her position, not because of sexism. But only the worst writers make things happen in a story without providing at least some semblance of a reason. The objection amounts to claiming that sexism stops being sexism once the writer justifies it in-story. If a story is extolling the virtues of a traditional 1950s marriage with the associated social roles, and in its universe women are less intelligent than men and naturally better cooks, that doesn’t make it non-sexist; it just means the writer spent some extra time and effort making a consistent story.

The first thing we have to ask is: would the kidnappers most plausibly get what they want by kidnapping Zelda? If not, then it’s crudely written sexism. If kidnapping Zelda would be a smart move for the kidnappers, then they’ve at least been written as Level 1 Intelligent Characters, which suggests the writers have some skill. But that alone barely starts to stave off sexism.

Our followup questions focus on differential treatment of female victims and characters in general. Are women victimized more than men in the story? Even when victimized (and even more when free and unharmed), are they treated as Level 1 Intelligent Characters rather than plot devices? Are they allowed to plan and to respond to events as any reasonable person in their position would? Or do they spend the entire game waiting for someone to rescue them?

Zelda is at least sometimes depicted as able to fight. She can turn into Sheik, who is pretty stealthy. She’s an adept magic user. She should be able to free herself of most reasonable prisons and assist in freeing herself otherwise. For instance, she seems to have some telepathic abilities; if she trained those, she could coordinate her own prison break and in the meantime continue running her kingdom.

After being kidnapped twice, she should be wise enough to invest in a skilled personal guard and keep them with her at all times. Link can be the captain of her guard. Additionally, finding a body double should be a high priority. Telepathic abilities or intensive training and hand signals would allow her to rule via body double. Or if her magic permits, she could have an illusion of herself in the throne. Body doubles and guards are standard techniques to secure a ruler; someone with the Triforce of Wisdom should come up with the idea in half a second.

Making all your characters Level 1 Intelligent Characters saves you from some types of sexism. It doesn’t save you from all of it. A fictional world might itself be sexist, and a Level 1 Intelligent 1950s Ideal Housewife is not automatically a feminist ideal. Every aspect of the world was put into it by the author. The Clockwork Rocket is a book that contains a fair bit of sexism — but that sexism was placed deliberately, and the author makes sexism, and dealing with sexism, a central aspect of the story.

In the Dragon Age series, soldiers rape women. This happens in the real world. However, the choice to include this in the Dragon Age world was deliberate. It is not a central point of the plot, as far as I can tell; it merely underlines how callous the soldiers are. It’s an afterthought, no more essential than a villain’s goatee.

What’s even worse is when the sexism built into the world is a pure reflection of the writer’s unthinking biases. Lara Croft being traumatized and nearly raped so we can sympathize with her pain, for instance. Amita in Far Cry 4 has an appearance befitting a model or an actress — I’m sure her long hair never gets in the way when she’s fighting in a storm and never gets caught on brambles she’s sneaking through. Her carefully plucked eyebrows, full lips painted pink, and radiant skin speak of pure practicality. And her lack of scarring shows us clearly how much time she’s spent fighting for her people. This is another type of casual sexism inserted into a game: aside from attire, nearly every woman must look like she just stepped off a runway at a fashion show.

It’s quite difficult for one individual to avoid sexism in one work of appreciable size. It’s much harder for a team to do so, if they’re not coordinating and checking each other’s work. We expect some to leak through. But oftentimes, it feels like nobody even began to try.

“I’m not sexist, I’m an asshole”

I saw a post over at the Daily Dot about a Computer Engineer Barbie book, one that’s totally sexist and doesn’t even pretend to be empowering or to portray Barbie as a capable being.

That’s horrible enough. The comments were another source of grief. In the story, Barbie enlists the help of more experienced developers, who are both male and who insist that they can do the work Barbie is trying to do faster. The story’s reviewer said that that smacked of every time one of her male colleagues talked over her or was otherwise dismissive of her abilities. One commenter, Brandon DuBois, responded:

This part shouldn’t be considered sexism, get over yourself and realize that programmers in general are conceited assholes who think their code is always better than yours (because honestly, after the fact anyone can write something better – if they can’t then nobody learned anything). This happens to all of us :p We also make fun of each others weights, joke about sexual preferences, and ridicule each other in plenty of other ways like making memes or photoshopping pictures each other posts on facebook – doesn’t matter if you’re a man or a woman in this industry, that’s just how most of us act.

Well. DuBois is certainly a conceited asshole. If I were his manager, I’d be sending him to HR on a weekly basis.

This is a common defense: my behavior’s not sexist/racist/classist/ableist; I’m equally shitty to everyone! Except that’s probably not true. DuBois probably treats his female colleagues as if they were less capable and less experienced than his male colleagues, and that’s beyond consensual mockery. The fact that his verbal assholery extends to men doesn’t make its content less sexist, homophobic, or fat-phobic, and it doesn’t reassure his colleagues who are female, homosexual, or overweight that he really doesn’t care about those attributes. Quite the opposite.

This is a problem in tech. These people are the problem. They don’t think they’re a problem. They probably don’t care if they are a problem as long as they aren’t getting flak about it. And we need them gone.