Blender Fox


Upgrading Ubuntu (fun! ¬_¬)

#

Spent several hours trying to upgrade my Ubuntu installation from 15 up to the latest 17. The upgrade didn’t fail outright, but I did see a few error messages, and now I have applications failing to start for various reasons, including the settings applet; and when I install or use my Nvidia drivers, Ubuntu doesn’t start up properly until I do

[code]

apt-get purge 'nvidia*'

[/code]

But removing all the Nvidia stuff causes it to fall back to nouveau, which for the most part works, but is not exactly good for any Linux gaming.

Looks like it’s going to be a full-reinstall job to make sure everything is clean :(
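When the Nvidia driver eventually needs to go back on, something along these lines usually does the trick on Ubuntu (a rough sketch, assuming the ubuntu-drivers tool is installed):

[code]

ubuntu-drivers devices          # list detected GPUs and the recommended driver packages
sudo ubuntu-drivers autoinstall # install the recommended proprietary driver

[/code]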

Tunnelling to Kubernetes Nodes & Pods via a Bastion

#

A quick note to remind myself (and other people) how to tunnel to a node (or pod) in Kubernetes via the bastion server

[code lang=text]
rm ~/.ssh/known_hosts # Needed if you keep scaling the bastion up/down

BASTION=bastion.{cluster-domain}
DEST=$1

ssh -o StrictHostKeyChecking=no -o ProxyCommand="ssh -o StrictHostKeyChecking=no -W %h:%p admin@$BASTION" admin@$DEST
[/code]

Run like this:

[code lang=text] bash ./tunnelK8s.sh NODE_IP [/code]

Example:

[code lang=text] bash ./tunnelK8s.sh 10.10.10.100 #Assuming 10.10.10.100 is the node you want to connect to. [/code]

You can extend this to SSH into a pod, assuming the pod has an SSH server running in it.

[code lang=text]
BASTION=bastion.{cluster-domain}
NODE=$1
NODEPORT=$2
PODUSER=$3

ssh -o ProxyCommand="ssh -W %h:%p admin@$BASTION" admin@$NODE ssh -tt -o StrictHostKeyChecking=no $PODUSER@localhost -p $NODEPORT
[/code]

So if you have a service listening on port 32000 on node 10.10.10.100 that expects a login user of “poduser”, you would do this:

[code lang=text] bash ./tunnelPod.sh 10.10.10.100 32000 poduser [/code]

If you have to pass a password, you can install sshpass on the node, then use that (be aware of the security risk though - this is not an ideal solution):

[code lang=text] ssh -o ProxyCommand="ssh -W %h:%p admin@$BASTION" admin@$NODE sshpass -p ${password} ssh -tt -o StrictHostKeyChecking=no $PODUSER@localhost -p $NODEPORT [/code]

Caveat though – you will have to make sure that your node security group allows your bastion security group to talk to the nodes on the additional ports. By default, SSH (22) is the only port the bastions are able to talk to the node security groups on.
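Opening up an extra port can be done with something like this from the AWS CLI (a sketch only - the security group IDs and the port are placeholders for your own values):

[code lang=text]
# Allow the bastion security group to reach the node security group on port 32000
aws ec2 authorize-security-group-ingress \
  --group-id sg-NODESECURITYGROUP \
  --protocol tcp \
  --port 32000 \
  --source-group sg-BASTIONSECURITYGROUP
[/code]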

How to use S3 as an RWM/NFS-like store in Kubernetes

#

Let’s assume you have an application that runs happily on its own and is stateless. No problem. You deploy it onto Kubernetes and it works fine. You kill the pod and it respins, happily continuing where it left off.

Let’s add three replicas to the group. That is also fine, since it’s stateless.

Let’s now change that so that the application is now stateful and requires storage of where it is in between runs. So you pre-provision a disk using EBS and hook that up into the pods, and convert the deployment to a stateful set. Great, it still works fine. All three will pick up where they left off.

Now, what if we wanted to share the same state between the replicas?

For example, what if these three replicas were frontend boxes for a website? Having three different disks is a bad idea unless you can guarantee they will all have the same content. Even if you can, there is bound to be a point where one or more of the boxes is behind or ahead of the others, and consequently serves the wrong version of the content.

There are several options for shared storage. NFS is the most logical, but it requires you to pre-provision a disk and to either have an NFS server outside the cluster or create an NFS pod within the cluster. You will also likely over-provision your disk here (100GB when you only need 20GB, for example).

Another alternative is EFS, which is Amazon’s NFS storage, where you mount an NFS and only pay for the amount of storage you use. However, even when creating a filesystem in a public subnet, you get a private IP which is useless if you are not DirectConnected into the VPC.

Another option is S3, but how do you use that short of using “s3 sync” repeatedly?

One answer is through the use of s3fs and sshfs

We use s3fs to mount the bucket into a pod (or pods), then we can use those mounts via sshfs as an NFS-like configuration.

The downside to this setup is the fact it will be slower than locally mounted disks.

So here’s the yaml for the s3fs pods (change values within {…} where applicable) – details at Docker Hub here: https://hub.docker.com/r/blenderfox/s3fs/

(and yes, I could convert the environment variables into secrets and reference those - there’s a rough sketch of that at the end of this post - and I might do a follow-up article for that)

[code]

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: s3fs
  namespace: default
  labels:
    k8s-app: s3fs
  annotations: {}
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: s3fs
  template:
    metadata:
      name: s3fs
      labels:
        k8s-app: s3fs
    spec:
      containers:
      - name: s3fs
        image: blenderfox/s3fs
        env:
        - name: S3_BUCKET
          value: {…}
        - name: S3_REGION
          value: {…}
        - name: AWSACCESSKEYID
          value: {…}
        - name: AWSSECRETACCESSKEY
          value: {…}
        - name: REMOTEKEY
          value: {…}
        - name: BUCKETUSERPASSWORD
          value: {…}
        resources: {}
        imagePullPolicy: Always
        securityContext:
          privileged: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

---
kind: Service
apiVersion: v1
metadata:
  name: s3-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: {hostnamehere}
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
  labels:
    name: s3-service
spec:
  type: LoadBalancer
  selector:
    k8s-app: s3fs
  ports:
  - name: ssh
    port: 22
    targetPort: 22
    protocol: TCP

[/code]

This will create a pod (via the Deployment) and a Service in front of it.

If you have external DNS enabled, the hostname will be added to Route 53.

SSH into the service and verify you can access the bucket mount

[code] ssh bucketuser@dns-name ls -l /mnt/bucket/ [/code]

(This should give you the listing of the bucket and also should have user:group set on the directory as “bucketuser”)

You should also be able to rsync into the bucket using this

[code] rsync -rvhP /source/path bucketuser@dns-name:/mnt/bucket/ [/code]

Or sshfs using a similar method

[code]

sshfs bucketuser@dns-name:/mnt/bucket/ /path/to/local/mountpoint

[/code]

Edit the connection timeout annotation if needed

Now, if you set up a pod that has three replicas and all three sshfs to the same service, you essentially have an NFS-like storage.
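As mentioned above, the credentials could be pulled from a Secret instead of being set as plain environment variables. A rough sketch of that (the Secret name s3fs-credentials is just a placeholder; the keys mirror the env names used in the Deployment):

[code]

kind: Secret
apiVersion: v1
metadata:
  name: s3fs-credentials
type: Opaque
stringData:
  AWSACCESSKEYID: {…}
  AWSSECRETACCESSKEY: {…}

[/code]

Then in the container spec, reference it like this:

[code]

env:
- name: AWSACCESSKEYID
  valueFrom:
    secretKeyRef:
      name: s3fs-credentials
      key: AWSACCESSKEYID
- name: AWSSECRETACCESSKEY
  valueFrom:
    secretKeyRef:
      name: s3fs-credentials
      key: AWSSECRETACCESSKEY

[/code]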


Single Point of Failure: The LKML [The Register]

#

You are always lectured about making backups of your systems, even more so when you are running archives from a very active mailing list. ^_^

www.theregister.co.uk/2018/01/1…

Guide to creating a Kubernetes Cluster in existing subnets & VPC on AWS with kops

#

This article is a guide on how to set up a Kubernetes cluster in AWS using kops, plugging it into your own subnets and VPC. We attempt to minimise the number of external IPs used in this method.

Export your AWS API keys into environment variables

[code lang=text]
export AWS_ACCESS_KEY_ID='YOUR_KEY'
export AWS_SECRET_ACCESS_KEY='YOUR_ACCESS_KEY'
export CLUSTER_NAME="my-cluster-name"
export VPC="vpc-xxxxxx"
export K8SSTATE="s3-k8sstate"
[/code]

Create the cluster (you can change some of these switches to match your requirements. I would suggest only using one worker node and one master node to begin with and then increasing them once you have confirmed the config is good. The more workers and master nodes you have, the longer it will take to run a rolling-update.)

kops create cluster --cloud aws --name $CLUSTER_NAME --state s3://$K8SSTATE --node-count 1 --zones eu-west-1a,eu-west-1b,eu-west-1c --node-size t2.micro --master-size t2.micro --master-zones eu-west-1a,eu-west-1b,eu-west-1c --ssh-public-key ~/.ssh/id_rsa.pub --topology=private --networking=weave --associate-public-ip=false --vpc $VPC

Important note: There must be an ODD number of master zones. If you tell kops to use an even number of zones for the masters, it will complain.

If you want to use additional security groups, don’t add them yet – add them after you have confirmed the cluster is working.

Internal IPs: You must have a VPN connection into your VPC or you will not be able to ssh into the instances. The alternative is to use the bastion functionality via the --bastion flag with the create command, and then connecting like this:

ssh -i ~/.ssh/id_rsa -o ProxyCommand='ssh -W %h:%p admin@bastion.$CLUSTER_NAME' admin@INTERNAL_MASTER_IP

However, if you use this method, you MUST then use public IP addressing on the API load balancer, as you will not be able to do kops validate otherwise.

Edit the cluster

kops edit cluster $CLUSTER_NAME --state=s3://$K8SSTATE

Make the following changes:

If you have a VPN connection into the VPC, change spec.api.loadBalancer.type to “Internal”; otherwise, leave it as “Public”. Change spec.subnets to match your private subnets. To use existing private subnets, each entry should also include the id of the subnet and match its CIDR range, e.g.:

[code lang=text]
subnets:
- cidr: {private-subnet-cidr}
  id: subnet-xxxxxxxx
  name: eu-west-1a
  type: Private
  zone: eu-west-1a
- cidr: {utility-subnet-cidr}
  id: subnet-yyyyyyyy
  name: utility-eu-west-1a
  type: Utility
  zone: eu-west-1a
[/code]

The Utility subnets are where the bastion hosts will be placed; these should be public subnets, since they will be the inbound route into the cluster from the internet.

If you need to change or add specific IAM permissions, add them under spec.additionalPolicies like this to add additional policies to the node IAM role:

[code lang=text]
additionalPolicies:
  node: |
    [
      {
        "Effect": "Allow",
        "Action": ["dynamodb:*"],
        "Resource": ["*"]
      },
      {
        "Effect": "Allow",
        "Action": ["es:*"],
        "Resource": ["*"]
      }
    ]
[/code]

Edit the bastion, nodes, and master configs (MASTER_REGION is the zone where you placed the master. If you are running a multi-region master config, you’ll have to do this for each region)

kops edit ig master-{MASTER_REGION} --name=$CLUSTER_NAME --state s3://$K8SSTATE

kops edit ig nodes --name=$CLUSTER_NAME --state s3://$K8SSTATE
kops edit ig bastions --name=$CLUSTER_NAME --state s3://$K8SSTATE

Check and make any updates.

If you want a mixture of instance types (e.g. t2.mediums and r3.larges), you’ll need to separate these into new instance groups ($SUBNETS is the list of subnets where you want the nodes to appear - for example, “eu-west-2a,eu-west-2b”):

kops create ig anothernodegroup --state s3://$K8SSTATE --subnets $SUBNETS

You can later delete this with

kops delete ig anothernodegroup --state s3://$K8SSTATE

If you want to use spot prices, add this under the spec section (x.xx is the price you want to bid):

maxPrice: "x.xx"

Check the instance size and count if you want to change them (I would recommend not changing the node count just yet)

If you want to add tags to the instances (for example for billing), add something like this to the spec section:

[code lang=text]
cloudLabels:
  Billing: product-team
[/code]

If you want to run some script(s) at node startup (cloud-init), add them to spec.additionalUserData:

[code lang=text]
spec:
  additionalUserData:
  - name: myscript.sh
    type: text/x-shellscript
    content: |
      #!/bin/sh
      echo "Hello from cloud-init" > /tmp/hello.txt
[/code]

Apply the update:

kops update cluster $CLUSTER_NAME --state s3://$K8SSTATE --yes

Wait for DNS to propagate and then validate

kops validate cluster --state s3://$K8SSTATE

Once the cluster returns ready, apply the Kubernetes dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml

Access the dashboard via

https://api.$CLUSTER_NAME/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/

If that doesn’t work, also try:

https://api.$CLUSTER_NAME/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

(ignore the cert error)

Username is “admin” and the password can be found in your local ~/.kube/config

Add the External DNS update to allow you to give friendly names to your externally-exposed services rather than the horrible ELB names.

See here: https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/aws.md

(You can apply the yaml directly onto the cluster via the dashboard. Make sure you change the filter to match your domain or subdomain. )

Note that if you use this, you’ll need to change the node IAM policy in the cluster config, as the default IAM policy won’t allow the External DNS container to modify Route 53 entries. You will also need to annotate your service (use kubectl annotate service $SERVICE_NAME key=value) with text such as:

external-dns.alpha.kubernetes.io/hostname: $SERVICE_NAME.$CLUSTERNAME
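For the IAM side, the Route 53 permissions can go into the same spec.additionalPolicies block shown earlier. Something along these lines should work (a sketch - check the external-dns documentation linked above for the policy it currently recommends):

[code lang=text]
additionalPolicies:
  node: |
    [
      {
        "Effect": "Allow",
        "Action": ["route53:ChangeResourceRecordSets"],
        "Resource": ["arn:aws:route53:::hostedzone/*"]
      },
      {
        "Effect": "Allow",
        "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
        "Resource": ["*"]
      }
    ]
[/code]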

You might also need this annotation to make the ELB internal rather than public - otherwise Kubernetes will complain “Error creating load balancer (will retry): Failed to ensure load balancer for service namespace/service: could not find any suitable subnets for creating the ELB”

service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
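Both annotations can be applied from the command line, for example (my-service is a placeholder for your actual service name):

[code lang=text]
kubectl annotate service my-service \
  "external-dns.alpha.kubernetes.io/hostname=my-service.$CLUSTER_NAME" \
  "service.beta.kubernetes.io/aws-load-balancer-internal=0.0.0.0/0"
[/code]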

(Optional) Add the Cockpit pod to your cluster as described here

http://cockpit-project.org/guide/133/feature-kubernetes.html

It will allow you to visually see a topology of your cluster and also provides some management features too. For example, here’s my cluster. It contains 5 nodes (1 master, 4 workers) and is running 4 services (Kubernetes, external-dns, cockpit, and dashboard). Cockpit creates a replication controller so it knows about the changes.

(Screenshot: Cockpit showing the cluster topology)

Add any additional security groups by adding this under the spec section of the node/master/bastion configs, then do a rolling-update (you might need to use the --force switch). Do this as soon as you can after creating the cluster and verifying that updates work.

[code lang=text]
additionalSecurityGroups:
- sg-xxxxxxxx
- sg-yyyyyyyy
[/code]

If the cluster breaks after this (i.e. the nodes haven’t shown up on the master), reboot the server (don’t terminate, use the reboot option from the AWS console), and see if that helps. If it still doesn’t show up, there’s something wrong with the security groups attached – i.e. they’re conflicting somehow with the Kubernetes access. Remove those groups and then do another rolling-update but use both the --force and --cloudonly switches to force a “dirty” respin.

If the cluster comes up fine, you can then change the node counts in the configs and apply the update.

Note that if you only change the node count and then apply the update, the change goes through without needing a rolling-update. For example, if you change the node count from 1 to 3, the cluster simply brings up the 2 additional nodes.

Other things you can look at:

Kompose - which converts a docker-compose configuration into Kubernetes resources
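As a rough illustration of Kompose (assuming a docker-compose.yml in the current directory with a service called web - both names here are hypothetical):

[code lang=text]
kompose convert -f docker-compose.yml                     # writes e.g. web-deployment.yaml and web-service.yaml
kubectl apply -f web-deployment.yaml -f web-service.yaml  # apply the generated resources to the cluster
[/code]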

Finally, have fun!

Massive Intel Chip Security Flaw Threatens Computers

#

An Intel flaw that has been sitting hidden for a decade has finally surfaced.

Being on the chip rather than in the OS, it isn’t limited to a single OS – Linux, Windows and MacOS are all mentioned in this article.

www.linuxinsider.com/story/850…

Tinkering with Kubernetes and AWS

#


This article just goes through my tinkering with Kubernetes on AWS.

Create a new S3 bucket to store the state of your Kubernetes clusters

aws s3 mb s3://k8sstate --region eu-west-2

Verify

aws s3 ls

Create a Route 53 hosted zone. I’m creating k8stest.blenderfox.uk

aws route53 create-hosted-zone --name k8stest.blenderfox.uk \
--caller-reference $(uuidgen)

dig the nameservers for the hosted zone you created

dig NS k8stest.blenderfox.uk

If DNS delegation to the hosted zone is already set up for your domain, you’ll see the nameservers in the output:

;; QUESTION SECTION:
;k8stest.blenderfox.uk.     IN  NS

;; ANSWER SECTION:
k8stest.blenderfox.uk. 172800 IN NS ns-1353.awsdns-41.org.
k8stest.blenderfox.uk. 172800 IN NS ns-1816.awsdns-35.co.uk.
k8stest.blenderfox.uk. 172800 IN NS ns-404.awsdns-50.com.
k8stest.blenderfox.uk. 172800 IN NS ns-644.awsdns-16.net.


Export your AWS credentials as environment variables (I’ve found Kubernetes doesn’t reliably pick up the credentials from the aws cli, especially if you have multiple profiles):

export AWS_ACCESS_KEY_ID='your key here'
export AWS_SECRET_ACCESS_KEY='your secret access key here'

You can also add it to a bash script and source it.
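For example, put the exports into a file and source it (the filename aws-creds.sh is just an illustrative choice):

# aws-creds.sh
export AWS_ACCESS_KEY_ID='your key here'
export AWS_SECRET_ACCESS_KEY='your secret access key here'

source ./aws-creds.sh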

Create the cluster using kops. Note that the master zones must have an odd count (1, 3, etc.). Since eu-west-2 only has two zones (a and b), I can only use one of them for the master here.

kops create cluster --cloud aws --name cluster.k8stest.blenderfox.uk \
--state s3://k8sstate --node-count 3 --zones eu-west-2a,eu-west-2b \
--node-size m4.large --master-size m4.large \
--master-zones eu-west-2a \
--ssh-public-key ~/.ssh/id_rsa.pub \
--master-volume-size 50 \
--node-volume-size 50 \
--topology private

You can also add the --kubernetes-version switch to specifically pick a Kubernetes version to include in the cluster. Recognised versions are shown at

https://github.com/kubernetes/kops/blob/master/channels/stable

TL;DR: The version bands are listed there, each with its own Debian image.


Assuming the create completed successfully, update the cluster so it pushes the update out to your cloud

kops update cluster cluster.k8stest.blenderfox.uk --yes \
--state s3://k8sstate

While the cluster starts up, all the new records will be set up with placeholder IPs.

(Screenshot: Route 53 record sets created with placeholder IPs)

NOTE: Kubernetes needs an externally resolvable DNS name. Basically, you need to be able to create a hosted zone on a domain you control. You can’t use Kops on a domain you can’t control, even if you hack the resolver config.

The cluster can take a while to come up. Use

kops validate cluster --state s3://k8sstate

to check the cluster state.

When ready, you’ll see something like this:

Using cluster from kubectl context: cluster.k8stest.blenderfox.co.uk

Validating cluster cluster.k8stest.blenderfox.co.uk

INSTANCE GROUPS
NAME                    ROLE    MACHINETYPE     MIN     MAX     SUBNETS
master-eu-west-2a       Master  m4.large        1       1       eu-west-2a
nodes                   Node    m4.large        3       3       eu-west-2a,eu-west-2b

NODE STATUS
NAME                                            ROLE    READY
ip-172-20-35-51.eu-west-2.compute.internal      master  True
ip-172-20-49-10.eu-west-2.compute.internal      node    True
ip-172-20-72-100.eu-west-2.compute.internal     node    True
ip-172-20-91-236.eu-west-2.compute.internal     node    True

Your cluster cluster.k8stest.blenderfox.co.uk is ready

Now you can start interacting with the cluster. First thing is to deploy the Kubernetes dashboard

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
 serviceaccount "kubernetes-dashboard" created
 role "kubernetes-dashboard-minimal" created
 rolebinding "kubernetes-dashboard-minimal" created
 deployment "kubernetes-dashboard" created
 service "kubernetes-dashboard" created

Now set up a proxy to the API

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Next, access

http://localhost:8001/ui

to get the dashboard.

Now let’s create a job to deploy on to the cluster.
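As a quick smoke test, a minimal generic Job could look like this (the names and image here are just placeholders, not from any real workload) - save it as hello-job.yaml:

apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  backoffLimit: 4
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: hello
        image: busybox
        command: ["echo", "Hello from the cluster"]

Then apply it and check its output:

kubectl apply -f hello-job.yaml
kubectl logs job/hello-job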

The Linux commands you should NEVER use (Hewlett Packard Enterprise)

#

Source: https://www.hpe.com/us/en/insights/articles/the-linux-commands-you-should-never-use-1712.html

The classic rm -rf / is there, along with accidentally dd’ing or mkfs’ing the wrong disk (I’ve done that before), but the lesser-known fork bombs and moving files to /dev/null are in there too. (I often redirect output to /dev/null, but have never moved files into there. That’s an interesting way of getting rid of files.)
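To illustrate the difference between redirecting to /dev/null and moving a file into it (a sketch - the second line is deliberately commented out, don’t run it):

[code]
ls -l / > /dev/null 2>&1  # redirecting: the output is simply discarded, nothing else is touched
# mv myfile /dev/null     # "moving" a file into /dev/null: the contents are discarded and the original is removed
[/code]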

Goodbye Apple, goodbye Microsoft... hello Linux

#

It’s not often I quote from a publication from Ireland, but this was quite an intriguing read: someone who went from Windows to Mac to Linux (Mint).

Linux is everywhere – and will free your computer from corporate clutches

It was 2002, I was up against a deadline and a bullying software bubble popped up in Windows every few minutes. Unless I paid to upgrade my virus scanner – now! – terrible things would happen.

We’ve all had that right?

In a moment of clarity I realised that the virus scanner – and its developer’s aggressive business model – was more of a pest than any virus I’d encountered. Microsoft’s operating system was full of this kind of nonsense, so, ignoring snorts of derision from tech friends, I switched to the Apple universe.

It was a great choice: a system that just worked, designed by a team that clearly put a lot of thought into stability and usability. Eventually the iPhone came along, and I was sucked in farther, marvelling at the simple elegance of life on Planet Apple and giving little thought to the consequences.

Then the dream developed cracks. My MacBook is 10 years old and technically fine, particularly since I replaced my knackered old hard drive with a fast new solid-state drive. So why the hourly demands to update my Apple operating system, an insistence that reminded of the Windows virus scanner of old?

Apple is no different to Microsoft it seems.

I don’t want to upgrade. My machine isn’t up to it, and I’m just fine as I am. But, like Microsoft, Apple has ways of making you upgrade. Why? Because, as a listed company, it has quarterly sales targets to meet. And users of older MacBooks like me are fair game.

I looked at the price of a replacement MacBook but laughed at the idea of a midrange laptop giving me small change from €1,200. Two years after I de-Googled my life (iti.ms/2ASlrdY) I began my Apple prison break.

He eventually went for Linux Mint, which for a casual user is fine. I use Fedora and Ubuntu (and a really old version of Ubuntu since my workplace VPN doesn’t seem to work properly with anything above Ubuntu 14 - their way of forcing me onto either a Windows or Mac machine)

Source: www.irishtimes.com/business/…

TMOUT - Auto Logout Linux Shell When There Isn't Any Activity

#

Something new I learned today – doing

export TMOUT=120

will automatically log out your current shell/login session after that many seconds of inactivity.

Very useful if you hook this into the root account’s profile, or set it as a default for all users so people can’t leave terminals open.
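A minimal way to set it for everyone (a sketch, assuming your distro reads drop-ins from /etc/profile.d, as most do) - marking the variable readonly stops users from simply unsetting it:

# /etc/profile.d/tmout.sh
TMOUT=120
readonly TMOUT
export TMOUT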

Source: TMOUT - Auto Logout Linux Shell When There Isn’t Any Activity

How to Install Multiple Linux Distributions on One USB

#

As someone who has tinkered with multiple distributions, I think this will be a great way to try out several at once.

This tutorial shows you how to install multiple Linux distributions on one USB. This way, you can enjoy more than one live Linux distros on a single USB key.

Source: How to Install Multiple Linux Distributions on One USB

Linus Torvalds Invites Attackers to Join the Ke... » Linux Magazine

#
Torvalds is not a huge fan of the ‘security community’ as he doesn’t see it as black and white. He maintains that bugs are part of the software development process and they cannot be avoided, no matter how hard you try. “constant absolute security does not exist, even if we do a perfect job,” said Torvalds in a conversation with Jim Zemlin, the executive director of the Linux Foundation.

“As a technical person, I’m always very impressed by some of the people who are attacking our code,” Torvalds said. “I get the feeling that these smart people are doing really bad things that I wish they were on our side because they are so smart and they could help us.”

Source: Linus Torvalds Invites Attackers to Join the Ke… » Linux Magazine

Android 8.0 Oreo, thoroughly reviewed | Ars Technica

#

Looking forward to when LineageOS can upgrade to Oreo. There are a lot of new features that may make life a lot easier generally. Take a look at the article for details.

We take a 20,000 word deep-dive on Android's "foundational" upgrades.

Source: Android 8.0 Oreo, thoroughly reviewed | Ars Technica

Update &amp; Build Prep – Lineage OS – Lineage OS Android Distribution

#

Cyanogen’s fork is beginning to take shape. Currently my devices aren’t showing, but fingers crossed they will.

A few points worth noting from their site:

However, also notable and I’m really happy about this:

Regarding installation, we recommend that users wipe when switching to LineageOS, and reinstall their gapps. However, we recognize that this can be time consuming, so we are offering an EXPERIMENTAL (read as, if it fails, you’ll have to wipe anyways) solution.
  • Alongside the ‘weekly’ release for your supported device, we’ll provide an EXPERIMENTAL data migration build.
  • This build will allow you to ‘upgrade’ from CM to the signed LineageOS weekly
  • This build may wipe permissions (you’ll have to re-allow app permissions), but should retain all user data
  • This build will be watermarked with an ugly banner to ensure that you don’t permanently run this EXPERIMENTAL release, and upgrade to a normal weekly after.
  • The process for this installation will be as follows:
    • Install EXPERIMENTAL migration build on top of cm-13.0 or cm-14.1 build (don’t try to install LineageOS 13.0 on top of CM 14.1, that will not work).
    • Reboot
    • Install LineageOS weekly build
    • Reboot
    • Re-setup your application permissions
Given the EXPERIMENTAL nature of this process, we are going to remove this option in two months time.


Source: Update & Build Prep – Lineage OS – Lineage OS Android Distribution

Why a MacOS user switched to Ubuntu Linux | InfoWorld

#


Also in today’s open source roundup: Ultimate Edition 5.0 Gamers distro released, and Android versus iOS for business

Source: Why a MacOS user switched to Ubuntu Linux | InfoWorld

OpenSSL after Heartbleed | Linux.com | The source for Linux information

#

Despite being a library that most people outside of the technology industry have never heard of, the Heartbleed bug in OpenSSL caught the attention of the mainstream press when it was uncovered in April 2014 because so many websites were vulnerable to theft of sensitive server and user data. At LinuxCon Europe, Rich Salz and Tim Hudson from the OpenSSL team did a deep dive into what happened with Heartbleed and the steps the OpenSSL team are taking to improve the project.

Source: OpenSSL after Heartbleed | Linux.com | The source for Linux information

Some Myths About Linux That Cause New Users To Run Away From Linux - LinuxAndUbuntu

#

An attempt to bust some of the myths that surround Linux. Not a lot of them, but a good few - some of which I see a lot in Windows communities, including the old classic “Linux is CLI only” (facepalm).

Source: Some Myths About Linux That Cause New Users To Run Away From Linux - LinuxAndUbuntu

iTWire - No highs, no lows: Linus Torvalds on 25 years of Linux

#

Linus has his moments. He’s well known for having a short temper and lashing out at contributors, but he is also known for creating Linux, which a significant number of devices these days use in some form or another. I work with it on a daily basis both at work and at home. Without this guy, I wouldn’t be where I am now. Well, possibly I would still be here, but dealing with cough Windows cough servers instead….

On August 25, Linux creator Linus Torvalds will be in a plane somewhere between Canada and the United States as his handiwork, which has completely changed the world of computing, marks its 25th birthday.

Source: iTWire - No highs, no lows: Linus Torvalds on 25 years of Linux

Also, here’s a TED talk he did

[ted id=2464 lang=en]


5 reasons to ditch Windows for Linux | TheINQUIRER

#

Ready to give up on Windows? You’re probably not alone

Source: 5 reasons to ditch Windows for Linux | TheINQUIRER

How To Install Steam on Ubuntu 16.04 LTS - OMG! Ubuntu!

#

Steam was one of the many things that broke with Ubuntu 16.04 because of numerous changes in package names and dependencies. Fortunately, here’s a guide to fix that. Now, back to my Dungeon Defenders :D

Source: How To Install Steam on Ubuntu 16.04 LTS - OMG! Ubuntu!


Kali Linux Pentesting Distribution -- Now Runnable in Browser

#
Everyone loves hearing about pentesting and ethical hacking distros these days, and it looks like it is even becoming a trend among aspiring security professionals.

Therefore, today we have some good news for those who want to try one of the best penetration testing and security auditing operating systems based on the Linux kernel, Kali Linux, the successor of the popular BackTrack, and don’t have the resources to run the Live CD or install the OS on their computers.

Network security specialist Jerry Gamblin has created a project called KaliBrowser, which, if you haven’t already guessed, it allows you to run the famous Kali Linux operating system on a web browser, using the Kali Linux Docker image, Openbox window manager, and NoVNC HTML5-based VNC client.

Source: news.softpedia.com/news/you-…

Tinkering

#

Looks like my weekend is going to be filled with tinkering again. ^_^;

I need to reinstall Windows on my laptop as I think there must be some graphics conflict somewhere and it’s lagging when it gets taxed (it didn’t normally). Most commonly, it happens when I’m playing Final Fantasy XIV, but it has lagged a bit on Alice: Madness Returns and Hyperdimension Neptunia U: Action Unleashed. I figured it might be my connection, since FFXIV is an MMORPG, so I switched from my WiFi to my 4G connection via tethering and it still lags. I then switched from DirectX 9 to DirectX 11, and still nothing. I even downgraded my Nvidia driver to a REALLY old version (since Nvidia ran into a huge bug with one of their drivers, if you recall), so I’m planning to run my Clonezilla backup tonight (which should take a few hours since I’m also backing up my Ubuntu install), then run my Windows install, and then boot-repair to get grub back (凸(>皿<)凸 Microsoft)

And then, I have to go through the process of installing drivers and updating Windows, though I will probably skip updating Windows since I only use it as a gaming environment. And downloading my Steam games again. Including the Heavensward expansion, Final Fantasy XIV is probably about 20-30GB. With the spikes and dips in download speed on my 4G, it’s going to take about 3 hours.

Ubuntu Founder Pledges No Back Doors in Linux

#

Whilst I totally respect Mark for coming out and saying this, that’s not to say that, in future, Canonical couldn’t be bullied into implementing a back door, or Ubuntu couldn’t be cracked by some untoward government agency….

VIDEO: Mark Shuttleworth, founder of Canonical and Ubuntu, discusses what might be coming in Ubuntu 16.10 later this year and why security is something he will never compromise.

Source: Ubuntu Founder Pledges No Back Doors in Linux


Magic happens with the Ubuntu tablet - TechRepublic

#

Jack Wallen reviews the bq Aquaris M10 tablet and he’s impressed. If you’ve been on the fence about Ubuntu Touch, this might just assuage those unpleasant feelings.

Source: Magic happens with the Ubuntu tablet - TechRepublic

Canonical tried to do this with their last attempt to crowdsource their Ubuntu phone, but it didn’t make enough money. This one looks pretty good too. Now I wonder if I could run Android apps on there too. :D

Linux Gaming Setup Part 2: Software configs, Nvidia binary driver, bumblebee, steam and playonlinux howto – Out Here In The Field : Persistence

#

I decided to make a second part of my Linux gaming setup post, as I feel that the first one is more like a list of stuff that are on top my desk. Anyway, once you’re done with hardware, let…

Source: Linux Gaming Setup Part 2: Software configs, Nvidia binary driver, bumblebee, steam and playonlinux howto – Out Here In The Field : Persistence