From an IT perspective, hacking is almost exclusively a nefarious activity. But the term has meant different things to different people over the years. Tech lore has it that John Nash--the mathematician played by Russell Crowe in the film “A Beautiful Mind”--first called a lazy, cheating academic a hacker at MIT in the 1940s, thus coining the term for a person who doesn’t use established protocols to get desired results.

By the 1970s, The Jargon File, a glossary of computer-geek slang, had reclaimed the word in a positive light: “A person who enjoys exploring the details of programmable systems and stretching their capabilities, as opposed to most users, who prefer to learn only the minimum necessary.”

Members of academic hacking communities would prefer that we use the term “cracker” for security breakers and authors of harmful software. Instead, we can call people who ethically test our systems for vulnerabilities “white hat hackers,” and those who illegally break into secure networks or destroy data and functionality within PCs and servers “black hat hackers.” Many hackers fall somewhere in between, or identify with a particular hacker subculture: hacktivists send political messages with their hacks, while corporations and open source communities host hackathons where people collaborate and teach one another in order to design better products.

We know hacking isn’t all bad, yet it costs U.S. enterprises more than $4 billion in damages each year. How did we get here?