
Cryptoanarchy Tutorial #0 – What the hell are you talking about?

Below is a list of common computer terms and basic explanations for them.

General Terms
Computer

It might sound silly to define what a computer is, but let’s do it. Computers, at their most basic, are machines that perform mathematical calculations very quickly. The history of computers is complicated, and the very definition is sometimes debated. Some have called the abacus the “first computer” since it increased the speed at which mathematical calculations could be done. Others reserve the “first computer” claim for the Analytical Engine, which was the first to leave complex calculations to the machine rather than humans, and some place it as late as the ENIAC.

Regardless, nearly all of these tutorials will refer to devices that have been common within the last 20 years, and there’s no doubt that they meet the definition of “computer”. However, this tutorial series would like to point out that “smartphones” and “tablets” are ALSO computers under this definition, and this is intentional. Many of the general notes apply equally to these types of devices as well as to laptops and desktops.

Desktop

A desktop computer is one that is typically stationary, connected to residential or commercial electrical wiring, and contained in a case. It is extremely common for these devices to be connected to a network via a physical cable rather than wirelessly, but this is not always true. These devices usually connect to monitors and projectors via cabling, rather than including the monitor as part of the device.

Laptop

Laptops are a type of computer designed to be portable while offering many of the same features as desktops. These devices usually fold and unfold like a book; contain a monitor, keyboard, mouse or touchpad, and wireless internet access (via Wi-Fi or cellular networks); and are powered by a battery.

Embedded computer (or embedded device)

Embedded devices are the remaining type of computer, though the category is intentionally nebulous. Smartphones, tablets, gaming consoles, and devices like TiVo or Roku are considered embedded. They may share traits with laptops, having permanently attached devices that are directly connected, or they may share traits with desktops in that they connect to other devices through the use of external cables.

You forgot about servers!

Technically speaking, “server” describes the role of a computer, rather than a device itself. Servers are any computers that are connected to a network and expected to await and respond to requests for a certain type of data. It is common to refer to servers by their function. A “web server”, for instance, is a computer that waits for and responds to web traffic (typically HTTP – a ‘web page’). Similarly, an “email server” is one that sends, receives, and stores email (SMTP).

Servers can be any computer. It is possible to configure laptops, desktops, and even smartphones to respond to this type of traffic. The only noteworthy statement about servers as a unique type of computer is that many (but not all) servers are physically shaped in a way that makes them suitable for packing several of them into a small space. These types of computers are usually very thin, 19 inches wide, and long. Additionally, the computing and network capacity of servers tends to be higher than that of desktops and laptops, but this is not always true.
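For what it’s worth, the role-not-device point is easy to demonstrate yourself. A minimal sketch, assuming a machine with Python 3 installed – this one command turns whatever runs it into a web server:

python3 -m http.server 8000   # serves the current directory over HTTP on port 8000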

Privacy

Privacy is the ability to control or limit the spread of information. It is almost always used to describe limiting the spread of information about oneself, or limiting who can access this information, if spread.

Privacy is technically a form of censorship. Many people associate the term “censorship” with “malicious intent” and “privacy” with “beneficent intent”, but once the subjective views of those involved are removed, the two are indistinguishable. Philosophical disputes arise over where “privacy” and “censorship” become the same thing.

This author (and all later documents) will use the term “privacy” to specifically refer to censoring data that was never intended for public dissemination, and “censorship” to refer to attempts by an entity (a person, government, or corporation) to control or limit the spread of information to the public in general.

This author will leave it to each individual to evaluate and distinguish between the two.

Adversary

When focusing on privacy, “adversary” describes the person, persons, or groups that one does NOT wish to reveal information to. Identity thieves, advertisers, and government agencies are the most common types of adversaries, but the term may also describe the public as a whole.


Code

Computers process data in a binary fashion; at its core, every computer is merely calculating a rapid series of 1s and 0s that represent all data. A “processor” is responsible for these calculations, and “memory” is responsible for storing the numbers so that the processor can access them. Code, then, is nothing more than “a set of instructions that a computer will use to process a set of numbers that it is given.”

Code on a modern computer is incredibly complex, and continues to increase in complexity. Converting 1s and 0s into any single character – say “A” – requires many types of code. Storing that character requires more code, retrieving that number even more, and so on, until millions or billions of lines of code are involved in simple processes like “Displaying the ‘A’ character to a user”. Yet more code is required to let users copy and paste this character into another document or web browser.
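As a quick aside, it’s easy to peek at these 1s and 0s yourself. A minimal sketch, assuming a Linux machine with the common xxd tool installed:

printf 'A' | xxd -b   # dump the raw bits stored for the letter "A"

The output shows a single byte, 01000001 – that pattern of 1s and 0s IS the letter “A” as far as the computer is concerned.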

Modern computers process billions and billions of these instructions every second, and continue to get more efficient at it. This increase in efficiency means that a Nintendo DS is literally more capable of doing complex calculations than ALL of the computers used by NASA for the 1969 moon landing… combined.

Machine-readable

In the early days of computers, a human being was responsible for setting every 1 and 0 manually, by flipping a switch. A single instruction would be completed and the result stored. The next instruction would then be manually entered, and the process repeated. (Fun fact: the reason a flaw in computer code is called a “bug” is that during this era, an insect physically landing on one of the switches could prevent the switch from making a full connection. This resulted in the computer generating a result that was not expected by the human operators.)

Directly inputting the 1s and 0s is a process called generating “machine-readable” code. As computers got faster and faster, instructions telling the computer to “do this yourself” began to emerge.

On a modern computer, nearly none of the instructions performed by a computer are given to it this way. It does still happen, though, which raises security and privacy concerns. There are very few reasons for this type of behavior, and in modern times it is done almost exclusively to prevent human beings from studying and repeating the code. This is almost always justified by “protecting patents” or “trade secrets”, but it is sometimes used directly to prevent human beings from doing certain things, like watching DVDs that were not purchased in their country.

Human-readable

Modern computer programmers have access to so much computing power that the very process of generating code is ITSELF done using other code. Rather than manually entering 1s and 0s, a modern programmer can write something like “if 1 + 1 = 2, then continue. If 1 + 1 DOES NOT = 2, then exit.” This human-readable line of text is then converted by other code into the series of 1s and 0s a computer needs to perform these comparisons.
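To make that concrete, here is roughly that same logic written out as a runnable, human-readable shell script (a sketch; the line above wasn’t in any particular language):

# A person can follow this logic at a glance; the computer cannot
# run it until other code translates it into machine instructions.
if [ $((1 + 1)) -eq 2 ]; then
    echo "1 + 1 = 2, continuing"
else
    exit 1
fi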

Compiling

Compiling is the process of turning human-readable code into machine-readable code. This requires, at its most basic, two things: a definition of the programming language, which establishes what the human-readable text means, and a compiler – code that interprets the language and data in the source and turns the commands into machine code.
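As an illustration, here is the whole round trip in miniature – a sketch assuming a Linux machine with the common gcc compiler installed:

# Write a tiny piece of human-readable source code to hello.c.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello\n"); return 0; }
EOF

gcc hello.c -o hello   # compile: human-readable in, machine-readable out
./hello                # the processor now runs the resulting 1s and 0s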

This is an advanced concept, and most people aiming to “just use a computer” will never need to know anything about this process. The author chose to include information about it, however, because it raises some serious privacy and security implications for the average user that will be discussed more fully in “Why open source software is important.”

“Ware” is this going?

Hardware

Computers are complex systems made up of multiple physical parts. All of these physical parts are hardware. Examples of hardware would be a keyboard, a disk drive, a processor, memory, and so on. Discussion about hardware is always about tangible parts of a computer.

Software

Software describes computer code of any type, in any form. It may be machine-readable, human-readable, or some combination of both. Most users are familiar with “apps”, “applications”, or “programs”; functionally, these are merely other terms for “software”. Most users are not, however, used to seeing the code itself – and that code is also software.

Firmware

Firmware is somewhat of a fuzzy concept. Technically, firmware is a subset of software that is included in a piece of hardware and controls its basic functions. A hardware device like a hard drive, for instance, actually contains its own (very limited, by comparison to a modern computer) processing abilities, and uses them for basic instruction sets like “read the data saved at this location” and “send that data to the processor”.

Malware

Malware is a nebulous term to describe “any software that is undesirable to the user”. This includes viruses and trojans, adware, and rootkits. It is derived from “mal”, meaning “bad”, plus “software”. Functionally, all malware is software; it is merely the “unwanted” aspect that separates it from software in general.

Proxmox 3.1 Clustering – pveca: command not found

I’ve been working on a personal project for many months, and have been confounded by delay after delay.

In the past 30 hours, it seems like every issue stopping me vanished. I’ve managed to not only work past an issue on a single server, but managed to get three servers installed with Proxmox and clustered.

It was a little confusing, though, especially when following the Proxmox documentation for initializing a cluster.

pveca: command not found

Dammit, did I mess up THREE installs?

No, it’s just that the Proxmox documentation is now versioned, and in Proxmox 2 the cluster initialization and node addition process changed. Yet Google still throws people at the 1.0 docs.

The correct command to initialize a cluster is:

pvecm create clustername

You’ll be replacing “clustername” with whatever you want to call your cluster – mine was “Randland”, in keeping with my node naming convention.

Adding nodes to the cluster is equally simple, using the new command.

pvecm add IPADDRESS

Again, you’ll be replacing that last part with the IP address of the server you wish to add. If this is the first time your master has contacted the other servers, you’ll need to do SSH key verification and password entry. Key-based login will automatically be set up, so future logins won’t require a password.

Once you’ve added your node(s), check the status.

pvecm status

Special thanks to Proxmox support forum user “Nicojanmu” for pointing to the Proxmox 2.0 Clustering instructions.

Cluster project, old archives, and more. Of course, “Coming Soon”.

I know I don’t write here often, and I know I have no regular readers, but that’s okay. :)

I’ve been getting back into tech again as a hobby, rather than the day-to-day stuff that’s mundane at work. I love my job quite a bit, but due to its corporate nature, change is excruciatingly slow, even though we’ve got a massive project underway that enables me to make a lot of positive changes.

In my personal space, I have no such constraints.

I’ve recently been concerned more and more with online privacy, and not just the “typical” concerns. I actually, for the most part, trust the major players in the cloud space. I don’t always like all of the decisions they make with regard to features or policy, but I do trust them. I’m far more concerned with government intrusion into that data, however. So I’m aiming to create a privacy-focused product for average folks. I won’t say much more than that, as it’s in pre-stealth mode, but it’s going to happen if that’s left to my will and resources alone.

So, my latest project was to build a cluster to house development environments, forward-facing services for myself (I eat my own dogfood!), and eventually, alpha testing for select users. Along the way, I’ve already become a bit more savvy with regard to oVirt and Gluster, and I’m only recently at the point of being able to spin up machines. I’ve also switched my personal Linux workstations from Ubuntu to Fedora. While Fedora is new to me, it’s a well-regarded distro with lots of users and documentation – but its users aren’t all people who, like me, have been using Ubuntu since day one, so I might run across obvious tips worth sharing.

As part of a long-overdue cleanup, in prep for moving my personal stuff to the cluster, I found an old backup archive of my now long-dead site, Monochrome Mentality. The domain is long gone, taken by squatters, but I still have most of my content, so that will be put up in some way, shape, or form.

New Laptop And Reinstall On The Same Day or “My Favorite Apps”

Yesterday was an interesting day in IT land, as I purchased a new laptop (Eluned) and upgraded my sound card on my living room PC (Athract).

The sound card installation borked my install of Windows 7, so I was forced to reinstall. Of course, my backups are centrally managed, so this wasn’t an issue for me.

I figured I would take a few moments to list out apps that I consider “critical” – just because.

First and foremost, Google Chrome. I expect most people know what Chrome is, but for those still living in the Dark Ages, it’s a web browser. Much, much faster than Internet Explorer, comes with Flash built in, and is even faster than the next best browser, Mozilla Firefox. It’s got a bunch of nifty extensions as well.

Secondly, VLC. This media player is hands-down the best I’ve seen. More, it’s open source, free, and can be used on just about every platform. It plays damn near every format imaginable, handles video and audio and seamlessly plays content from my DLNA server.

Transmission Remote Dotnet comes next. Transmission is a great torrent client, but it’s not available for Windows. uTorrent is what I prefer on Windows systems, but what is horribly lacking all around is a way to integrate it all. So what I did was install transmission-daemon on my “network hub” and manage it remotely on the PC side. This is what Transmission Remote Dotnet does: once launched, it opens a connection to my server and displays the torrent list, statistics, and options as if I were managing torrents on this PC. It saves, however, to my 2.5TB fileserver. Even if I shut down my PCs, so long as the server is up, my torrents are going. This is great for maintaining the good seed ratios I value. Additionally, I can expose this to work and add torrents while on the job that download to my home server. Quite nifty!
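If you want to replicate this setup, the daemon side only needs a couple of settings. A hedged sketch, assuming a Debian-style install of transmission-daemon (key names are from its settings.json):

sudo service transmission-daemon stop   # it rewrites settings.json on exit

# In /etc/transmission-daemon/settings.json, enable remote access:
#   "rpc-enabled": true,
#   "rpc-port": 9091,
#   "rpc-whitelist": "127.0.0.1,192.168.*.*"

sudo service transmission-daemon start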

Dolphin Emulator is something I use to satisfy my cravings for The Legend of Zelda. Paired with Wii MotionPlus controllers and a wireless sensor bar, Dolphin lets me play pretty much any Wii game I want. It’s a little clunkier than actually buying a Wii, but there are certain benefits to playing on the emulator – higher resolution, for one.

Duck Capture is a screenshot application. It’s always annoyed me that Windows makes screenshot capture annoying as hell by default. Linux has the right idea – push PRINTSCREEN, and the file is on your desktop. Macs are a little worse, requiring one of three (that I use) multi-key commands, though they’re more flexible by default since each gives you different options. Duck Capture on Windows changes this, combining the best of what Linux and Mac offer. Properly set up (in the options, change three drop boxes), PRINTSCREEN grabs a snap of the entire screen, ALT+PRINTSCREEN snaps a cropped section of the screen, and CTRL+PRINTSCREEN grabs a selected window. There’s another option for “scrolling capture”, but I never use it. Once the image is taken, I can press save to have the file put on my Desktop OR uploaded directly to the web with a single click. Quite handy.

I am anal about properly tagged and organized music, and Mp3tag is my app of choice to aid me in that. A simple interface lets me pull up and list files, edit metadata, and change or apply album art directly to the file (none of this folder.jpg crap!). Even better, it will search Amazon, MusicBrainz, or Discogs for all of the appropriate tags and art as well. Finally, the killer feature is the batch renaming (called “Convert”) function that will use my new metadata to rename the file itself according to my personal naming convention.

WinCDEmu is one I’ve only recently added to my list. Previously I used Daemon Tools, and this serves the same purpose: taking ISO images and mounting them like physical disks (only faster). Like Daemon Tools, it supports CD, DVD, and Blu-ray images, but it will also mount .bin/.cue files as well as a few other formats I use more rarely. WinCDEmu is open source and completely free of cost. But where it blows its competition away is in simplicity. In the old days, Daemon Tools was pretty simple; with its installer now sporting toolbars, homepage redirects, and all kinds of other cruft, the clean, straightforward WinCDEmu rocks. More, it integrates right into Explorer, making mounting ISOs or .cue files a three-click process. The interface is simple and the whole application is speedy.

Last (for this post at least) is 7zip, an archive utility. Windows handles .zip files natively, but it doesn’t support .rar files. 7zip handles them well, and adds in a plethora of others. This is especially useful for a mixed-OS household like mine – 7zip gives Windows the ability to work with tarballs.
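One wrinkle worth knowing: a tarball is two layers (a tar archive inside gzip compression), so from 7zip’s command-line tool (7z, also packaged as p7zip on Linux) extraction is a two-step affair. A quick sketch with a hypothetical backup.tar.gz:

7z x backup.tar.gz   # strips the gzip layer, leaving backup.tar
7z x backup.tar      # unpacks the actual files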


ISA Proxies, Authentication, and Linux

I work in a corporate office that is heavily Windows-based. In fact, I am one of only two people with a dedicated machine on their desk running Linux, and the other is my manager. I am the only person in the entire office who focuses exclusively on Linux. Needless to say, starting from the ground up with a fresh install can be a bit painful.

Our office is behind an ISA proxy that requires authentication, and sadly, none of the distros I’ve used can handle this at a system level. To use an authenticated proxy on Linux, you need to proxy your proxy connections (Yo dawg!).

The service that does this most simply is CNTLM. It is open source and packaged by all of the major distros. Install it using your method of choice, keeping in mind that you may need to download it prior to the install, depending on what traffic you’re allowed to send out before authenticating.

Configuration on my work LAN only requires a five-line config. The first three should be painfully obvious:

Username testuser
Domain corp-uk
Password password

Username is the username you’re authenticating as (for me, it’s my Active Directory sign-on; I’m not sure if that’s the entire point of the ISA server or just how ours works – I’m not very well versed in Windows-side complexities).

Domain is the domain of your network. Password is your password.

There are options to generate a hashed password, which in most cases is FAR more secure than a plaintext file. If you want to do that, the option is there. For me, I don’t care: every password I use is different, and the people who could “steal” this password have the capacity to change it at will. Furthermore, I encrypt the drive with a strong key, and if that’s compromised, a hashed password is the least of my issues.
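For reference, cntlm will generate those hashes for you. A sketch (substitute your own username and domain):

cntlm -H -u testuser -d corp-uk

It prompts for the password and prints hashed-password lines (PassNT and friends) that you can paste into the config in place of the plaintext Password line.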
The next important line specifies the location of the ISA proxy.

Proxy 10.217.112.41:8080

There are actually two proxy lines specified in the default config, so take note of that. I only have one assigned proxy, so I need to comment out the second one. Hostnames are also acceptable values for this.
The last important line is the listen port.

Listen 3128

This specifies the port on which cntlm should accept connections. This is where your traffic will be authenticated and passed on to your outbound proxy server. Take note of this port.
cntlm is now configured. Configure your OS to start the daemon at boot for convenience; you can do this with either update-rc.d or chkconfig, depending on your distro.
You should now point all proxy connections that require authentication to localhost on the port you specified in the cntlm configuration. In Gnome, for instance, you would point the Network Proxy settings to 127.0.0.1, port 3128. Authentication is now transparent and survives reboots.
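Two final odds and ends, as a sketch: enabling the daemon at boot, and covering shell tools that ignore the desktop proxy settings but honor the conventional environment variables.

sudo update-rc.d cntlm defaults   # Debian/Ubuntu style
sudo chkconfig cntlm on           # Fedora/RHEL style

# Point command-line tools at the local cntlm listener:
export http_proxy=http://127.0.0.1:3128
export https_proxy=http://127.0.0.1:3128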

Yum with a proxy on Fedora 17

Because I work in a “mixed shop”, when I head into the office I need to fuddle around with proxies in order to get on the internet. Given that I work at an internet company, this means I need to mess around every time I install a system.

Finding directions for getting apt working on Debian-based systems is easy; it’s usually in the first or second post. Fedora, however, is a bit different.

Simple, but different.

To get Yum working with a proxy, one must edit /etc/yum.conf and add the following line:

proxy=http://hostname:port

Assuming your proxy is yoshi.lan and the port you use is 8080, it would look like:

proxy=http://yoshi.lan:8080
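As an aside, if your proxy also demands credentials directly (rather than going through something like cntlm), yum.conf supports companion options for that – the values here are placeholders:

proxy=http://yoshi.lan:8080
proxy_username=testuser
proxy_password=password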

It seems obvious, but it took me several searches and tries before I found this. The advice to edit bashrc and add the http_proxy variable to my bash profile did not work.