xxx: OK, so, our build engineer has left for another company. The dude was literally living inside the terminal. You know, that type of guy who loves Vim, creates diagrams in Dot and writes wiki posts in Markdown… If something - anything - requires more than 90 seconds of his time, he writes a script to automate it.
xxx: So we’re sitting here, looking through his, uhm, “legacy”
xxx: You’re gonna love this
xxx: smack-my-bitch-up.sh - sends a text message “late at work” to his wife (apparently). Automatically picks reasons from an array of strings, randomly. Runs inside a cron-job. The job fires if there are active SSH-sessions on the server after 9pm with his login.
xxx: kumar-asshole.sh - scans the inbox for emails from “Kumar” (a DBA at one of our clients). Looks for keywords like “help”, “trouble”, “sorry” etc. If keywords are found, the script SSHes into the client’s server and rolls back the staging database to the latest backup. Then sends a reply: “no worries mate, be careful next time”.
xxx: hangover.sh - another cron-job that is set to specific dates. Sends automated emails like “not feeling well/gonna work from home” etc. Adds a random “reason” from another predefined array of strings. Fires if there are no interactive sessions on the server at 8:45am.
xxx: (and the Oscar goes to) fucking-coffee.sh - this one waits exactly 17 seconds (!), then opens an SSH session to our coffee machine (we had no frikin idea the coffee machine was on the network, runs Linux and has sshd up and running) and sends some weird gibberish to it. Looks binary. Turns out this thing starts brewing a mid-sized half-caf latte and waits another 24 (!) seconds before pouring it into a cup. The timing is exactly how long it takes to walk to the machine from the dude’s desk.
xxx: holy sh*t I’m keeping those
Original: http://bash.im/quote/436725 (in Russian)
Pull requests with other implementations (Python, Perl, Shell, etc) are welcome.
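For flavour, here is a minimal shell sketch of the kind of cron-driven check described above (the username, phone number and the send_sms helper are hypothetical placeholders, not the actual script):

#!/bin/bash
# Hypothetical sketch: if "buildguy" still has sessions open after 21:00,
# text a randomly chosen excuse. send_sms stands in for whatever SMS gateway you use.
REASONS=("deploy overran" "prod issue, firefighting" "stuck in a release meeting")
if [ "$(date +%H)" -ge 21 ] && who | grep -q '^buildguy '; then
    send_sms "+440000000000" "Late at work: ${REASONS[$((RANDOM % ${#REASONS[@]}))]}"
fi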
Found this tool which exposes your Google Drive as a FUSE mount, allowing you to copy to and from your drive as if it was a directory on your desktop. It is slow, though.
So, after finally fixing my environment and manually having to use the 3.17 kernel, I have a running environment, but Dungeon Defenders still hangs, and Dota 2 has graphics rendering issues, meaning I miss the opponents and they can creep up behind me, along with enemy grunts.
Guess I won’t be playing anything Steam-based for a while…
I think I have figured out why my machine has been playing up.
In both cases, my machine was trying to run kernel 3.19, but after checking Kernel.org, I found that this kernel version has been marked EOL.
I installed Fedora 21, which came with kernel 3.17 and worked, but after updating to kernel 3.19 it stopped working. Forcing it to run on 3.17 was okay, though.
Latest kernel release is 4.0.4, so I need to wait for Fedora to update.
Interestingly, it could also explain why I was having trouble with Ubuntu, as it also ran 3.19. When trying to reinstall Ubuntu from the latest install image, it hung, presumably because it was trying to use the 3.19 kernel. In theory, I could use an older installer (e.g. Utopic Unicorn) instead.
So now, I’m running Fedora with a 3.19 main kernel (which fails) and 3.17 secondary kernel. I was going to file a bug on Kernel.org, but found out about 3.19 being EOL, which means no bug fixes will be released, so there is no point in filing the bug.
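For reference, pinning the working 3.17 kernel as the default boot entry on Fedora can be done through GRUB; a rough sketch, assuming grub.cfg lives in /boot/grub2 and using an example entry title (copy the exact title from your own machine):

sudo grep "^menuentry" /boot/grub2/grub.cfg                         # list the available boot entries
sudo grub2-set-default "Fedora, with Linux 3.17.4-301.fc21.x86_64"  # make the 3.17 entry the default (title is an example)
sudo grub2-editenv list                                             # confirm the saved default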
On the plus side, my machine seems SO much zippier running Fedora, although trying to run Dota 2 is a bit quirky. Dust: An Elysian Tail worked pretty well, as did Second Life (I was able to crank Singularity Viewer up to Ultra without major speed loss).
I still need to reinstall BOINC and any other missing apps I might have, and get used to using yum, yumex and dnf instead of apt-get, aptitude and synaptic all over again, but apart from that, it should be all good.
Well, something weird happened with my Ubuntu Studio installation and I was getting strange scheduling errors. I tried Debian, but ran into the known issue with libc, so I’ve switched over to Linux Mint for now, even though that runs on an older Ubuntu base (Trusty).
Hmm… well, whilst Linux Mint runs really well, and hibernate works perfectly, it doesn’t seem to play Steam games very well – severe graphics corruption, although Second Life seems to be fine.
I’ll try Debian again, then drop back to Ubuntu again if all else fails.
Build script now builds correctly under Utopic. Also modified the Jenkins job to include the commit version so you can see the commit active at the time of the build.
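For anyone curious, a simple way to do that kind of stamping (a sketch with placeholder script and artifact names, not necessarily how my job is wired up) is to capture the hash in the shell build step:

# In the Jenkins "Execute shell" build step, after the Git checkout
COMMIT=$(git rev-parse --short HEAD)   # the Git plugin also exports $GIT_COMMIT
./build-docker.sh
mv docker.io docker.io-${COMMIT}       # the artifact name now records the commit it was built from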
As you may recall from an earlier post, I discovered my broadband connection at home was horrendously slow compared to my 4G/LTE connection on my phone. Now, I regularly tether my laptop to my phone and enjoy download speeds in excess of 1.2Mbps, compared to 300-400kbps over my home broadband. However, if I turn the phone into a WiFi hotspot, my MNO (Three) doesn’t like it and asks me to pay £5 for a 2GB allowance. There doesn’t appear to be a restriction on physical tethering, though (and I’ve downloaded more than 2GB that way).
So, the question is, is it possible to tether my phone to my laptop and share that connection to other machines on the network? I suspect so, but it will involve me tinkering with my internet settings, and disabling settings in my broadband router, so it behaves more like a hub than a router.
My thoughts are (and this is subject to my tinkering):
Configure my broadband router to not issue IP addresses -- not necessary if you have static IPs on your network.
Configure my laptop (which has the phone tethered) with a DHCP server so that it does issue IP addresses. Again not necessary if you have static IP addresses everywhere.
If you have static IP addresses everywhere, change the default gateway to be the IP address of the machine with the tethered phone (the laptop in my case)
Configure my laptop to route packets out via the tethered connection -- notably, to switch on IP forwarding. From brief research, this shouldn't need a kernel recompile on a stock distro kernel; it's normally just a sysctl toggle, possibly plus loading a module via modprobe (see the sketch after this list)
Make adjustments to the firewall (iptables, the successor to ipchains) to allow IP masquerading/NAT, preferably with some lock-down so that not just anyone can access the net via my phone.
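As a rough sketch of those last two points, assuming the tethered phone shows up as usb0 and the LAN side is eth0 on a 192.168.1.0/24 network (adjust to suit):

sudo sysctl -w net.ipv4.ip_forward=1                        # turn on IP forwarding; no kernel recompile needed on a stock kernel
sudo iptables -t nat -A POSTROUTING -o usb0 -j MASQUERADE   # NAT traffic going out over the tethered interface
sudo iptables -A FORWARD -i eth0 -o usb0 -s 192.168.1.0/24 -j ACCEPT                      # only forward traffic from my own subnet
sudo iptables -A FORWARD -i usb0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT   # and replies coming back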
If I can tear myself away from my newly found Final Fantasy XIV questing, I may try messing with my settings and see if I can get this to work.
If you are like me, and have a slow and/or unreliable internet connection, trying to upload any reasonably-sized video to YouTube can be a nightmare: it forces you to keep your computer on for hours on end, only for the upload to fail because your connection dropped, leaving you to start all over again.
Well, one way to get resume protection is to use a midpoint such as Amazon Web Services (or a similar cloud provider), and then upload to YouTube from there. Since the connection between the cloud instance and YouTube is likely to be faster and more reliable than your own connection, that final leg of the upload should rarely fail.
The first step is to set up and start an instance on AWS. I am using the Ubuntu image.
SSH into the instance and install supporting packages via apt-get or aptitude. Make sure you change the IP (xx.xx.xx.xx) and the key (AWS_Ireland.pem) to match your files.
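Something along these lines should do it; googlecl provides the google command used later, though the exact package set here is my assumption:

# From your local machine, connect to the instance (substitute your key file and IP)
ssh -i AWS_Ireland.pem ubuntu@xx.xx.xx.xx

# On the instance: GoogleCL provides the "google" command, rsync handles resumable transfers
sudo apt-get update
sudo apt-get install googlecl rsync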
We can now start using the Google services, but first we need to authenticate. This is normally done via a graphical browser, but since we are stuck in a terminal, the flow is slightly different.
$ google youtube list
Please specify user: [enter your email address here]
You will see a text version of the login page. Don’t bother entering your credentials; just press ‘q’ to quit and confirm the exit. Then you’ll see a URL in the terminal window, along these lines:
Please log in and/or grant access via your browser at:
[www.google.com/accounts/...](https://www.google.com/accounts/OAuthAuthorizeToken?oauth_token=){hidden}&hd=default
Go to that url and sign in. Then, come back to the console and press enter. If all goes well, you should see your video uploads in the console window.
Now, to upload a video to the AWS instance. You can use rsync for that, and the command to enter into your local terminal is as follows (change the key file to match yours and the IP address field to match your instance’s IP):
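A command along these lines should work:

# Copy "source" to the default user's home directory on the instance; -P keeps partial files so uploads can resume
rsync -avP -e "ssh -i AWS_Ireland.pem" source ubuntu@xx.xx.xx.xx:~/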
This uploads the video called “source” into the home folder of the default user on your EC2 instance (if you want another location on your instance, use that instead). Rsync allows you to resume uploads via the -P switch. When the rsync command completes successfully, you can SSH back onto the instance and use the “google youtube post” command to upload your video to YouTube.
NOTE: On some large files, rsync breaks on resuming with the error message “broken pipe”. If this happens to you, see this page (specifically, Q3).
Once your video is uploaded to your EC2 instance, you can then upload that video to YouTube by using this:
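Assuming GoogleCL, that looks roughly like this (YouTube requires a category; the value here is just an example):

google youtube post --category Games source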
With my new laptop up and running, I am happily playing several Steam-based games:
Dust (I am pretty impressed by this. It is a side-scrolling platformer with really nice visuals, voice-acting and insane combo options)
Dungeon Defenders (yet to start this under Steam, but played it extensively under Android)
Dota (downloading)
Wakfu (played on my old 32-bit box, but Steam refuses to start the game if you are running a 32-bit arch; with a 64-bit arch, it starts fine)
I also have some extra games that I’m not playing or stopped playing:
Bastion (tried it and it is interesting, but I’m finding it really difficult to get into)
Dungeon Hearts (slow and laggy)
Ravensword: Shadowlands (an interesting 3D game, but with very few customisation options)
Toribash (Appeal wore off very quickly)
I do still have them tied into the Steam framework, though, so I can start them up via Steam and grab screenshots, etc.
I have also looked at Fraps and Kazam – two screen recorders (Fraps for Windows, Kazam for Linux) – and both perform pretty well. Fraps performs great; I got smooth, high-quality video out of WoW. With Kazam, I’m still tweaking: it also produces good-quality video, but I need to play with the framerate and encoding settings. Look out for some uploads at some point in the future.
I have started posting up my builds of Docker.io. They are unofficial, and unsupported by the community, pending official support and code release supporting 32-bit architectures.
Some changes to the Docker.io code have caused the build script to fail; this was down to the code now building a btrfs storage driver. It took me a while to figure out how to fix that error, but the script now works. You have to add this chunk of code anywhere before the main Docker build:
# Fetch btrfs-progs and build it so the Docker btrfs driver can find what it needs
git clone git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git
mv btrfs-progs btrfs  # Needed so the Docker code can include it under the "btrfs" name
export PATH=$PATH:$(pwd)
cd btrfs
make || { echo "btrfs compile failed"; exit 1; }  # braces, not a subshell, so the failure actually aborts the script
export C_INCLUDE_PATH=$C_INCLUDE_PATH:$(pwd)          # Might not be needed
export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:$(pwd)  # Might not be needed
echo "PATH: $PATH"
cd ..
Virtualisation, Sandboxes, Containers. All terms and technologies used for various reasons. Security is not always the main reason, but considering the details in this article, it is a valid one. It is simple enough to set up a container on your machine. LXC/Linux Containers, for example, don’t have as much overhead as a VirtualBox or VMware virtual machine and can run almost (if not just) as fast as a native installation (I’m using LXC for my Docker.io build script). Conceptually, if a container gets infected with malware, you can drop and rebuild it, or roll back to a snapshot, much more easily than reimaging your machine.
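For anyone who wants to try it, spinning up a basic Ubuntu container with the LXC userland tools looks roughly like this (the container name is arbitrary):

sudo apt-get install lxc                # the LXC userland tools
sudo lxc-create -t ubuntu -n sandbox    # create a minimal Ubuntu container named "sandbox"
sudo lxc-start -n sandbox -d            # start it in the background
sudo lxc-attach -n sandbox              # get a shell inside it
sudo lxc-stop -n sandbox                # stop it before snapshotting
sudo lxc-snapshot -n sandbox            # take a snapshot you can roll back to later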
Right now I run three different environments: the first is my main Ubuntu Studio install, which is not a container but my core OS; the second is my Docker.io build LXC, which I rebuild every time I compile (and I now have that tied into Jenkins, so I might put up regular builds somehow); and the final one is a VirtualBox virtual machine running Windows 7 so I don’t have to dual boot.
This is what I like about studying for certifications. They force you to look into subjects at a deeper level than you may otherwise have done. One of the topics in LPIC-2 is Kernel maintenance - understanding the kernel, how it works, the concept of dynamically loaded modules, compiling the kernel and modifying the configuration prior to compiling. It is very intriguing learning about this low-level part of the Linux OS.
It is a complex topic though, and compiling a kernel can take a while, depending on configuration. I managed to compile and install my first kernel today. :)
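For anyone curious, the basic sequence is roughly the standard one, run from an unpacked kernel source tree (the -j value is just an example):

make olddefconfig            # start from the running kernel's config, taking defaults for new options
make menuconfig              # optionally tweak options (built-in vs module, drivers, etc.)
make -j4                     # build the kernel image and modules; -j roughly matches your CPU core count
sudo make modules_install    # install the modules under /lib/modules/<version>
sudo make install            # install the kernel image and update the bootloader entries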
Meanwhile, since I’ve now obtained my SUSE 11 CLA from Novell (not sure if anything will come through in the post, though), I must really refresh my memory on openSuSE. My primary distributions have been Ubuntu and Fedora.
Whilst downloading various Linux distributions, including the latest versions of Ubuntu and Fedora, I found out that I could download SteamOS, which is Valve’s own distribution for running its Steam platform. The problem is that it only supports x86-64, which I don’t have (yet).
It would appear Microsoft are looking to port .NET onto Linux and Mac. Whilst Linux already has an open-source implementation of the framework, this news is obviously raising eyebrows and prompting questions about Microsoft’s long-term goals.
Former Microsoft CEO Steve Ballmer became infamous in 2006 after leading a Microsoft Windows meeting in a chant, “developers, developers, developers.” While the images of him clapping his hands and screaming became the target of the early social media and YouTube culture, he was right with his intention. Developers are the masters of the universe (at least in the world of software), and Microsoft gets it.
Today the company is making a rather big announcement: It is open sourcing the server side .NET stack and expanding it to run on Linux and Mac OS platforms. All developers will now be able to build .NET cloud applications on Linux and Mac. These are huge moves for the company and follow its recent acknowledgement that at least 20 percent of Azure VMs are running Linux. This struck a chord in the Twittersphere but wasn’t all that surprising when you consider how pervasive Linux is in the cloud.
Interestingly, after upgrading to Ubuntu Utopic Unicorn, the build script I made for Docker.io fails during the Go build. Something in the Utopic minimal install doesn’t agree with the Go build step, so for now you will have to force the LXC container to use Trusty instead.
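With the stock Ubuntu LXC template, the release can be pinned at creation time; a rough sketch (the container name is just mine):

# Everything after "--" is passed to the ubuntu template; -r pins the release
sudo lxc-create -t ubuntu -n docker-build -- -r trusty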
After several months, I have upgraded my phone from an HTC Sensation to a Samsung Galaxy S5, and the first thing I did was root the S5, which surprisingly was VERY easy - just use ODIN (Samsung's flashing tool). My HTC Wildfire S was super hard to root (I had to use the XTC hardware tool). Now I have Titanium Backup restoring my apps. I am considering flashing CFW, but I want to make sure I have a Nandroid-compatible recovery installed first.
I also have redownloaded Zombies, Run!, and I am going to try to catch up with the storyline from where I left off. My last mission was "Veronica" back in June (Mission 15), but I'm going to redo the entire Season 3 missions again to be sure.
Zombies, Run! now supports (albeit at an experimental level) external media players, somewhat obsoleting my Google Music tutorial. I haven’t tried this feature yet, and it does note that enabling this mode means ZR won’t let you select music playlists internally. Not that it matters to me right now, since I have loads of music stored locally on my device.
I have received confirmation of my LPIC Certification. So I expect I’ll get something in the post to confirm this. Now starting work on LPIC-2. Whether or not this changes my employability, we shall see.
Interestingly, I decided to look at some other courses offered by the online course site I got my learning material from, and there are some Ethical Hacking and Computer Forensics courses there. I might look at these at some point in the future, perhaps after I finish LPIC-2.