Spent a big chunk of today preparing for, and attempting, an upgrade of my Pixelbook to GalliumOS.
I imaged it, made a file backup of my home directory, installed GalliumOS over my Ubuntu, restored the home directory backup into the newly installed OS, and chowned the directory back to me.
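That restore boiled down to something like this (a sketch from memory; the mount point and username are assumptions, not my actual paths):

```shell
# Sketch of the restore-and-chown step; paths and username are assumptions.
# The file backup is assumed to live on an external drive at /mnt/backup.
sudo rsync -a /mnt/backup/home/user/ /home/user/
sudo chown -R user:user /home/user
```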
As a habit, I then imaged the laptop in this state.
I prepared a semi-automated script to install apps that I had installed on my Ubuntu, which included things like virt-manager, virtualbox, google-chrome and the like.
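The script itself was little more than a package list fed to apt, roughly like this (the exact list and the Chrome .deb step are assumptions; Chrome is not in the Ubuntu archive, so it comes from Google's own .deb):

```shell
# Rough sketch of the semi-automated reinstall script; the package list is an assumption.
sudo apt-get update
sudo apt-get install -y virt-manager virtualbox
# google-chrome is not in the Ubuntu archive, so pull Google's .deb directly:
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo apt-get install -y ./google-chrome-stable_current_amd64.deb
```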
However, I soon found out that VirtualBox 6.1 seems to crash the mouse driver on reboot: the pointer no longer moves, and GalliumOS doesn't even see a pointer device when you check the mouse and touchpad settings. I had to revert to the image I took just after the file copy.
There is always the option of installing VirtualBox 6.0 from the Ubuntu repositories rather than the Oracle repositories, which uses a different installation setup. Maybe that will result in a different outcome.
Eventually, I restored my original Ubuntu installation so I could retry tomorrow.
EDIT: Retried again the next day, and found out the sound wasn't working, even on the live disk. Better find out what's the deal with that...
EDIT2: Found out that my Pixelbook model doesn't have working sound drivers on GalliumOS, so I will have to wait until that is fixed before using it. I guess I'm staying on Ubuntu. In the meantime, I'm going to see if I can compile a later version of the kernel and somehow get VirtualBox working better.
After using my Pixelbook Eve on Ubuntu Eoan (19.10) for a while, Ubuntu started notifying me about an upgrade to 20.04 LTS. So, based on my past experience of Ubuntu upgrades and how they always break things, I went through the process of backing up my files and making a Clonezilla image of my Pixelbook before doing anything else.
Then I went through the upgrade. It completed without any problems, but on the reboot afterwards I got a black screen after the Ubuntu splash screen.
I suspect it's because my Pixelbook contains some tweaks via this GitHub repo, and that is still using a 4.x kernel. Last update was in 2019, so maybe it's out of date.
Before restoring my old image back on, I installed GalliumOS which is an Ubuntu-based distro specifically aimed at ChromeOS devices. Then made a backup image of that before restoring the old image back on.
I might try installing Ubuntu 20.04 clean and see if that has any better Pixelbook support than the older versions, and make it so I don't need the hacks. Bear in mind the hacks used the ChromeOS kernel, and I couldn't do some things like use ufw or gufw. Using GalliumOS should fix that, since I wouldn't be using the tweaks.
However, GalliumOS still has an annoying quirk on my Pixelbook: the jumpiness of the mouse pointer. Touch the touchpad and the pointer jumps to the corresponding part of the screen, as if the touchpad were an absolute map of the screen rather than a touchpad. It's a quirk you can get used to, but it is still annoying.
So, I installed Ubuntu 17 clean on my laptop after the driver issues I'd been having, and immediately found out that gksu was not installed.
Installed that and tried to
gksudo nautilus
That failed, and I found out that Wayland had replaced Xorg as the default. I found an old Xauthority file in my backups and copied it back, which got the popup window for my gksu showing again, but I couldn’t click it to enter the password :(
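For the record, the workaround usually suggested for running root GUI apps under Wayland is to grant root access to your display rather than fight with Xauthority files (an assumption on my part that it applies to the gksu case too; I haven't verified it here):

```shell
# Commonly suggested Wayland workaround for root GUI apps (untested sketch):
xhost +SI:localuser:root   # allow the root user to talk to your display
sudo nautilus              # run the app as root directly
xhost -SI:localuser:root   # revoke the grant again afterwards
```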
Spent several hours upgrading my Ubuntu installation from 15 to the latest 17. The upgrade didn’t fail outright, but I did see a few error messages, and now I have applications failing to start for various reasons, including the settings applet; and when I install or use my Nvidia drivers, Ubuntu doesn’t start up properly until I do
[code]
sudo apt-get purge 'nvidia*'
[/code]
But removing all the Nvidia stuff causes a fallback to nouveau, which for the most part works, but isn’t exactly good for any Linux gaming.
Looks like it’s going to be a full-reinstall job to make sure everything is clean :(
It’s not often I quote an Irish publication, but this was quite an intriguing read: someone who went from Windows to Mac to Linux (Mint).
Linux is everywhere – and will free your computer from corporate clutches
It was 2002, I was up against a deadline and a bullying software bubble popped up in Windows every few minutes. Unless I paid to upgrade my virus scanner – now! – terrible things would happen.
We’ve all had that right?
In a moment of clarity I realised that the virus scanner – and its developer’s aggressive business model – was more of a pest than any virus I’d encountered. Microsoft’s operating system was full of this kind of nonsense, so, ignoring snorts of derision from tech friends, I switched to the Apple universe.
It was a great choice: a system that just worked, designed by a team that clearly put a lot of thought into stability and usability. Eventually the iPhone came along, and I was sucked in farther, marvelling at the simple elegance of life on Planet Apple and giving little thought to the consequences.
Then the dream developed cracks. My MacBook is 10 years old and technically fine, particularly since I replaced my knackered old hard drive with a fast new solid-state drive. So why the hourly demands to update my Apple operating system, an insistence that reminded me of the Windows virus scanner of old?
Apple is no different to Microsoft, it seems.
I don’t want to upgrade. My machine isn’t up to it, and I’m just fine as I am. But, like Microsoft, Apple has ways of making you upgrade. Why? Because, as a listed company, it has quarterly sales targets to meet. And users of older MacBooks like me are fair game.
I looked at the price of a replacement MacBook but laughed at the idea of a midrange laptop giving me small change from €1,200. Two years after I de-Googled my life (iti.ms/2ASlrdY) I began my Apple prison break.
He eventually went for Linux Mint, which for a casual user is fine. I use Fedora and Ubuntu (and a really old version of Ubuntu since my workplace VPN doesn’t seem to work properly with anything above Ubuntu 14 - their way of forcing me onto either a Windows or Mac machine)
Steam was one of the many things that broke with Ubuntu 16.04 because of numerous changes in package names and dependencies. Fortunately, here’s a guide to fix that. Now, back to my Dungeon Defenders :D
Looks like my weekend is going to be filled with tinkering again. ^_^;
I need to reinstall Windows on my laptop as I think there must be a graphics conflict somewhere: it lags when it gets taxed (it didn’t normally). Most commonly it happens when I’m playing Final Fantasy XIV, but it has lagged a bit on Alice: Madness Returns and Hyperdimension Neptunia U: Action Unleashed too. I figured it might be my connection, since FFXIV is an MMORPG, so I switched from my WiFi to my 4G connection via tethering, and it still lagged. I then switched from DirectX 9 to DirectX 11, and still nothing. I even downgraded my Nvidia driver to a REALLY old version (since Nvidia ran into a huge bug with one of their drivers, if you recall). So I’m planning to run my Clonezilla backup tonight (which should take a few hours since I’m also backing up my Ubuntu install), then run my Windows install, then boot-repair to get grub back (凸(>皿<)凸 Microsoft)
And then, I have to go through the process of installing drivers and updating Windows, though I will probably skip updating Windows since I only use it as a gaming environment. And downloading my Steam games again. Including the Heavensward expansion, Final Fantasy XIV is probably about 20-30GB. With the spikes and dips in download speed on my 4G, it’s going to take about 3 hours.
Whilst I totally respect Mark for coming out and saying this, that’s not to say that, in future, Canonical couldn’t be bullied into implementing a back door, or Ubuntu cracked by some untoward government agency…
VIDEO: Mark Shuttleworth, founder of Canonical and Ubuntu, discusses what might be coming in Ubuntu 16.10 later this year and why security is something he will never compromise.
Jack Wallen reviews the bq Aquaris M10 tablet and he’s impressed. If you’ve been on the fence about Ubuntu Touch, this might just assuage those unpleasant feelings.
Canonical tried to do this with their last attempt to crowdfund their Ubuntu phone, but it didn’t make enough money. This one looks pretty good too. Now I wonder if I could run Android apps on there too. :D
It won’t boot ISOs unless you convert them with hdiutil, which is a proprietary Apple tool, or the ISO has already been EFI-enabled; and since hdiutil isn’t open source, I can’t even do the conversion beforehand.
The Macbook won’t work with a known-good HDMI cable (one I use with a desktop PC) unless it’s Apple-branded, which, Apple being Apple, isn’t the least bit surprising…
I’ve tripped over their power supply more than once, and putting it at the plug end makes it bulky and ugly.
My first course of action with regards to the setup? Trash MacOS and install Ubuntu. Of course, Apple make things endlessly difficult – I had to hdiutil the Ubuntu ISO to make it bootable, then install Ubuntu. After which, the Macbook wouldn’t boot.
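For anyone fighting the same battle, the hdiutil step was along these lines, run on the Mac before wiping it (the filenames are placeholders, not my actual ones):

```shell
# Convert the ISO into a read-write image the Mac firmware will boot (run on macOS).
hdiutil convert -format UDRW -o ubuntu.img ubuntu-desktop-amd64.iso
# hdiutil appends ".dmg", so the output is ubuntu.img.dmg; dd that onto a USB stick.
```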
I found out I had to fiddle with the efibootmgr tool to change the boot order, and it works fine now. But then I had to figure out how to right-click on a no-button touchpad. The hack is found on the Debian site (look at the mouseemu post at the bottom). So now I have a clickable touchpad, with right-click being “ctrl+click”.
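The efibootmgr fiddling amounted to something like this (the boot entry numbers here are made up for illustration; list your own entries first):

```shell
sudo efibootmgr                # list the current boot entries and boot order
sudo efibootmgr -o 0000,0080   # put the Ubuntu entry (0000 here) ahead of the Mac one
```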
I think I have figured out why my machine has been playing up.
In both cases, my machine was trying to run kernel 3.19, but after checking Kernel.org, I found that this kernel version has been marked EOL.
I installed Fedora 21, which came with kernel 3.17 and worked, but after updating, it stopped working, with kernel 3.19. Forcing it to run on 3.17 was okay, though.
Latest kernel release is 4.0.4, so I need to wait for Fedora to update.
Interestingly, it could also explain why I was also having trouble with Ubuntu, as it also ran 3.19. When trying to reinstall Ubuntu from the latest install image, it hung, presumably because it was trying to use the 3.19 Kernel. In theory, I could use an older installer (e.g. Utopic Unicorn) instead.
So now, I’m running Fedora with a 3.19 main kernel (which fails) and 3.17 secondary kernel. I was going to file a bug on Kernel.org, but found out about 3.19 being EOL, which means no bug fixes will be released, so there is no point in filing the bug.
On the plus side, my machine seems SO much zippier running Fedora, although trying to run Dota 2 is a bit quirky. Dust: An Elysian Tail worked pretty well, as did Second Life (I was able to crank Singularity Viewer up to Ultra without major speed loss).
I still need to reinstall BOINC and any other missing apps I might have, and get used to using yum, yumex and dnf instead of apt-get, aptitude and synaptic all over again, but apart from that, it should be all good.
Interestingly, after upgrading to Ubuntu Utopic Unicorn, the build script I made for Docker.io fails during the Go build. Something in the Utopic minimal install doesn’t agree with the Go build script, so for now you will have to force the LXC container to use Trusty instead.
NOTE: Automated 32-bit-enabled builds are now available. See this page for link details.
EDIT 29th September 2015: This article seems to be quite popular. Note that Docker has progressed a long way since I wrote this and it has pretty much broken the script due to things being moved around and versions being updated. You can still read this to learn the basics of LXC-based in-container compiling, and if you want to extend it, go right ahead. When I get a chance, I will try to figure out why this build keeps breaking.
Steps to compile and build a docker.io binary on a 32-bit architecture, to work around the fact that the community does not support anything other than 64-bit at present, even though this restriction has been flagged up many times.
A caveat, though. As the binary you compile via these steps will work on a 32-bit architecture, the Docker images you download from the cloud may NOT work, as the majority are meant for 64-bit architectures. So if you compile a Docker binary, you will have to build your own images. Not too difficult – you can use LXC or debootstrap for that. Here is a quick tutorial on that.
I am using an LXC container to do my build as it helps control the packages and reduces the chance of a conflict between two versions of a package (e.g. a “dev” version and a “release” version). The container is also disposable: I can get rid of it each time I do a build.
I utilise several scripts: one to do a build/rebuild of my LXC container; one to start up my build-environment LXC container and take it down afterwards; and the other, the actual build script. To make it more automated, I set up my LXC container to allow a passwordless SSH login (see this link). This means I can scp into the container and copy files to and from it without having to enter my password, which is useful because the container can take a few seconds to start up. It does open up security problems, but as long as the container is only up for the duration of the build, this isn’t a problem.
EDIT: One note. If you have upgraded to Ubuntu’s Utopic Unicorn version, you may end up having to enter your GPG keyring password a few times.
EDIT 2: A recent code change has caused this build script to break. More details, and how to fix it on this post.
As with most things Linux and script-based, there are many ways to do the same thing. This is just one way.
Script 1: rebuildDockerLXC.sh
This script does the rebuild of my LXC build environment
First, I take down the container if it already exists. I named my container “Ubuntu” for simplicity.
lxc-stop -n Ubuntu
Next, destroy it.
lxc-destroy -n Ubuntu
Now create a new one using the Ubuntu template. Here, I also inject my SSH public key into the container so I can use passwordless SSH
IMPORTANT NOTE: If you are NOT running Ubuntu Trusty, you MUST use the "--release" option. If you are running on an x86 architecture and want to compile a 32-bit version, you MUST also use "--arch i386" (otherwise LXC will pull the amd64 packages down instead). There is a problem with the Go build script on Utopic. Hopefully that will be fixed at some point in the future.
Start up the container, and send it to the background
lxc-start -n Ubuntu -d
Wait until LXC reports the IP address of the container, then assign it to a variable for later use. We do this by waiting for LXC to report an IP, then running 'ifconfig' inside the container to get the IP as the container sees it. The 'lxc-info' command can return two IP addresses (the actual one and the bridge), and it is not always obvious which one is which.
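A small polling function does the waiting. Written generically like this it can be pointed at any command that eventually prints an IP; in the real script that command would be "sudo lxc-info -n Ubuntu -iH" (an assumption about the flags on the LXC version in use):

```shell
# Poll a command until its first line of output is non-empty, then echo that line.
# Intended use: CONTAINERIP=$(wait_for_ip sudo lxc-info -n Ubuntu -iH)
wait_for_ip() {
  local ip=""
  while [ -z "$ip" ]; do
    ip=$("$@" 2>/dev/null | head -n 1)
    [ -z "$ip" ] && sleep 1
  done
  echo "$ip"
}
```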
Script 2: compileDocker.sh
This script is the wrapper around the build process. It starts up the build container, runs the build inside it, then pulls the resulting output from the container after the build is done, extracting it into the current folder.
First, check if we should rebuild the build environment. I normally do, to guarantee a clean slate each time I run the build.
echo -n "Rebuild Docker build environment (Y/N)? "
read REPLY
case "$REPLY" in
Y|y)
echo Rebuilding docker build environment
./rebuildDockerLXC.sh #If you want to rebuild the LXC container for each build
;;
N|n|*)
echo Not rebuilding docker build environment
;;
esac
for a in *.txz
do
echo Extracting $a
tar -xvvvf $a && rm $a
done
Done.
Script 3: dockerCompile.sh
This script is run inside the container and performs the actual build. It is derived mostly from the Dockerfile that is included in the Docker.io repository, with some tweaks.
First, we install the basic packages for compiling
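A minimal set for this kind of build would be something like the following (an assumption rather than the exact original list: the compiler toolchain plus git and Go):

```shell
# Assumed package set: compiler toolchain, git and Go for the Docker build.
apt-get update
apt-get install -y build-essential git golang ca-certificates
```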
Now the important bit. We patch the source code to remove the 64-bit arch restriction.
for f in $(grep -rl 'if runtime.GOARCH != "amd64" {' $GOPATH/src/*)
do
echo Patching $f
sed -i 's/if runtime.GOARCH != "amd64" {/if runtime.GOARCH != "amd64" \&\& runtime.GOARCH != "386" {/g' $f
done
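One gotcha with this kind of sed expression: an unescaped "&" in the replacement text stands for the whole matched string, so an "&&" being inserted needs to be written as "\&\&". Here is a quick self-contained check on a throwaway file (the file contents are made up for the demo):

```shell
# Demonstrate the arch-restriction patch on a sample line of Go source.
tmp=$(mktemp)
printf '\tif runtime.GOARCH != "amd64" {\n' > "$tmp"
sed -i 's/if runtime.GOARCH != "amd64" {/if runtime.GOARCH != "amd64" \&\& runtime.GOARCH != "386" {/g' "$tmp"
cat "$tmp"   # the line now also excludes the "386" architecture check
rm "$tmp"
```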
Finally, we build docker. We utilise the Docker build script, which gives a warning as we are not running in a docker environment (we can’t at this time, since we have no usable docker binary)
cd $GOPATH/src/github.com/docker/docker/
./hack/make.sh binary
cd ../../../../../
Assuming the build succeeded, we should be able to bundle the binaries (this will be copied off by the compileDocker.sh script)
cd go/src/github.com/docker/docker/bundles
for a in *
do
echo Creating $a.txz
tar -cJvvvvf $a.txz $a
done
mv *.txz ../../../../../../
cd ../../../../../../
And that’s it. How to build docker on a 32-bit arch.
I installed Fedora 20 and gave it a test drive. Whilst I was happy it seemed to run well, the graphics driver appeared to be flaky. Under Ubuntu Studio, glxgears fullscreen was giving me around 60-65 fps; under Fedora, I was getting ~45 fps. I then tried Linux Mint Debian Edition, and that had the same problem. So now I’m back on Ubuntu Studio. But I might give vanilla Debian a go as well and see if that helps…
I dug out my Wacom Bamboo graphics tablet and plugged it into my Ubuntu Studio installation, but frustratingly, I cannot seem to emulate a scroll wheel, which I need for my work in Blender. Sure, I can use the keypad +/-, but that isn’t the way I’m supposed to work.
I might switch over to Fedora later this week and see if that is any better. Or maybe even put Linux Mint back on. I know that both have gone through new versions since I last used them. Fedora was at Schroedinger’s Cat / Version 19 and Linux Mint was at Maya / Version 13 last time I used it.
Now may be a good time to start looking at other distributions. openSUSE seems appealing, but it has caused me problems with restoring from CloneZilla images in the past, especially cross-operating system.
A quick snippet for syncing your date and time via NTP. I have noticed that Windows and Linux do not follow the same hardware clock convention by default (Windows stores local time in the hardware clock, Linux stores UTC), so they are always an hour out from each other, even though both claim to follow the same time zone. So, what I am having to do is sync via NTP each time I dual boot.
In Linux, this can be done using cron jobs or using the NTP daemon, but that does not do it frequently enough for my liking. So here is a bash snippet for it:
sudo service ntp stop
sudo ntpdate 0.ubuntu.pool.ntp.org 1.ubuntu.pool.ntp.org 2.ubuntu.pool.ntp.org 3.ubuntu.pool.ntp.org 0.uk.pool.ntp.org 1.uk.pool.ntp.org 2.uk.pool.ntp.org 3.uk.pool.ntp.org ntp.ubuntu.com
sudo service ntp start
The first line stops the NTP daemon, since ntpdate does not like it running (port in use). The second command syncs against one of the servers in the list. The final line restarts the NTP daemon.
The Windows (Windows 7) equivalent is very similar. Like Linux, it has a built-in sync facility, but again it does not sync often enough for my liking. As with the bash snippet, the commands must be run with elevated rights, so you must "Run as Administrator", or run from an elevated-rights command prompt, which you get as follows:
Click Start and type "cmd" into the search box (do NOT use Windows+R)
Hold down CTRL+SHIFT, then press ENTER
You will be prompted (if you have UAC active), OK it and you will get a command prompt with "Administrator" in the title.
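From that elevated prompt, the commands were roughly as follows (a sketch; the peer list here is an assumption, substitute your preferred NTP pool):

```shell
:: Windows commands, run from the elevated command prompt.
net stop w32time
net start w32time
w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org" /syncfromflags:manual /update
w32tm /resync
```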
This code starts/restarts the Windows Time service then configures it with a pool of NTP servers, before asking the service to update itself and then resync. The resync action is what adjusts the time.
The recent linux kernel update has screwed up my admin accounts on both my Ubuntu-based boxes (Lubuntu & Ubuntu Studio). I’ve spent three hours creating a new user, making them sudo-enabled, then moving my files from my old profile to my new one and tinkering with a few scripts and desktop shortcuts that were still pointing at the old home directory.
Mind you, it’s given me an opportunity to test my LPIC-1 knowledge.
I’ve been running with Ubuntu on my desktop for a long time, even after upgrading it to 4GB RAM (it’s a really old PC). Nonetheless, Lubuntu (which is Ubuntu with LXDE) prompted me to upgrade from Raring to Saucy. I did, and as with all Ubuntu upgrades, it took absolutely ages to complete. But after a reboot, I noticed the login screen is now identical to the LXDE login screen of my Fedora box (which is also using LXDE). This is good and bad – good in that it gives users a consistent login experience regardless of distribution, but bad in the sense that the identity of Ubuntu has been slightly lost.
So ends a crazy month. We’ve broken records, we’ve been written and talked about across the world, we’ve worn out our F5 keys, and we’ve learned a lot of invaluable lessons about crowdfunding. Our bold campaign to build a visionary new device ultimately fell short, but we can take away so many positives.
We raised $12,809,906, making the Edge the world’s biggest ever fixed crowdfunding campaign. Let’s not lose sight of what an achievement that is. Close to 20,000 people believed in our vision enough to contribute hundreds of dollars for a phone months in advance, just to help make it happen. It wasn’t just individuals, either: Bloomberg LP gave $80,000 and several smaller businesses contributed $7,000 each. Thank you all for getting behind us.
Then there’s the Ubuntu community. Many of you gave your time as well as money, organising your own mailing lists, social media strategies and online ads, and successfully reaching out to your local media. We even saw entire sites created to gather information and help promote the Edge. We’ll be contacting our biggest referrers personally.
Most importantly, the big winner from this campaign is Ubuntu. While we passionately wanted to build the Edge to showcase Ubuntu on phones, the support and attention it received will still be a huge boost as other Ubuntu phones start to arrive in 2014. Thousands of you clearly want to own an Ubuntu phone and believe in our vision of convergence, and rest assured you won’t have much longer to wait.
All of the support and publicity has continued to drive our discussions with some major manufacturers, and we have many of the world’s biggest mobile networks already signed up to the Ubuntu Carrier Advisory Group. They’ll have been watching this global discussion of Ubuntu and the need for innovation very closely indeed. Watch this space!
As for crowdfunding, we believe it’s a great way to give consumers a voice and to push for more innovation and transparency in the mobile industry. And who knows, perhaps one day we’ll take everything we’ve learned from this campaign -- achievements and mistakes -- and try it all over again.
Thank you all
Mark Shuttleworth, the Ubuntu Edge team and everyone at Canonical
P.S. We’ve been assured by Paypal that all refunds will be processed within five working days.
This is a very useful boot disk - it allows you to download the latest network installer from the relevant site and boot it, without having to burn or create another stick. It supports the major distributions: Ubuntu, Debian, Fedora, openSUSE, Mandriva, Scientific Linux, CentOS and Slackware.
Be warned, though, Network Installers by nature can be heavily console-based.
As a result of the recent Ubuntu Forums hack, I’ve now had to spend several hours going through all my internet login accounts to see whether or not I have used the same password anywhere else. Not surprisingly, I have, so I have to go through and change them all. Fortunately, LastPass lets me generate secure passwords to replace the others. The only place I would really be concerned about is email, but I have had 2-factor authentication turned on there for many months, so they would need my email address, password AND phone to get into my account. Even my backup codes are stored on a TrueCrypt volume on a LUKS partition on my laptop, so they would need two passwords to get at those.
Mind you, it IS good in a way that these forums were hacked: it’s given me a reason to go through my accounts and see which ones I still use and which ones I can delete.