Why the ‘Snoopers’ Charter’ is ultimately unworkable.

July 13th, 2015

I’m going to suggest that there’s one very obvious question this whole ‘Snoopers’ Charter’ has overlooked: “How will this affect secure business communications?” Applications such as WhatsApp use a protocol known as XMPP, or ‘Jabber’ (WhatsApp’s server side was originally built on ejabberd, the open-source XMPP server; MongooseIM is a well-known commercial fork of the same codebase). Other similar products would be HipChat and Slack. Telegram too, whose developers designed it specifically to resist interception.

These aren’t just used for personal communications, but for intra- and inter-business communications between teams and enterprises. If it were pointed out to business leaders that, for ‘the safety of the country’, they could no longer use these systems securely, there would be a lot more of an outcry – from the very people the government claims to be helping and supporting to ‘get Britain back on its feet’.

Because of course, which business would want to use a communications system that had back-doors in it (or be prevented from using one that doesn’t), allowing their important communications to be monitored by the government, hackers or other people with nefarious purposes? If businesses cannot have secure communications to keep their industry secrets – or the government cannot keep its own, given its ever-greater reliance on IT infrastructure – then how do they expect to keep all those ‘dodgy deals’ out of the public eye? After all, any ‘official back door’ is only an invitation, or a challenge, to be hacked. And if they were to claim that businesses were exempt, then all you would have to do is register yourself as a limited company and claim the exemption for yourself… Theresa May, like the rest of this Conservative government, is an idiot.

Finding a dodgy cable with Cisco Catalyst cable diagnostics

September 14th, 2011

We had a couple of servers that were themselves reporting as connected @ 1G/FD, as were the switchports they were connected to. Yet iperf between the two boxes was pretty pants – about 34 Mbits/sec.
So I had a think and looked up how to test the cable lengths, which also showed me that the twisted pairs making up the ethernet cable weren’t all working properly.
After the cable change, iperf between the same two machines showed 941 Mbits/sec, which is a vast improvement!
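For reference, the throughput test was just a plain iperf run between the two boxes; a minimal sketch (the hostname is hypothetical – use the receiving server’s address):

```shell
# On the receiving server: start iperf in server mode (listens on TCP 5001)
iperf -s

# On the sending server: run a 10-second TCP throughput test against it
iperf -c server01.example.com
```

The client prints the achieved bandwidth at the end of the run, which is the figure quoted above.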

Before the cable change:

core02#show cable-diagnostics tdr interface gigabitEthernet 1/0/11
TDR test last run on: August 12 19:50:34
Interface Speed Local pair Pair length        Remote pair Pair status
--------- ----- ---------- ------------------ ----------- --------------------
Gi1/0/11  1000M Pair A     7    +/- 10 meters Pair B      Normal
                Pair B     0    +/- 10 meters Pair A      Open
                Pair C     1    +/- 10 meters Pair D      Open
                Pair D     0    +/- 10 meters Pair C      Open
core02#show cable-diagnostics tdr interface gigabitEthernet 1/0/12
TDR test last run on: August 12 19:50:50
Interface Speed Local pair Pair length        Remote pair Pair status
--------- ----- ---------- ------------------ ----------- --------------------
Gi1/0/12  1000M Pair A     1    +/- 10 meters Pair A      Open
                Pair B     7    +/- 10 meters Pair B      Normal
                Pair C     0    +/- 10 meters Pair C      Open
                Pair D     0    +/- 10 meters Pair D      Short/Crosstalk

After the cable change:

core02#show cable-diagnostics tdr interface gigabitEthernet 1/0/11
TDR test last run on: August 12 20:10:32
Interface Speed Local pair Pair length        Remote pair Pair status
--------- ----- ---------- ------------------ ----------- --------------------
Gi1/0/11  1000M Pair A     345  +/- 10 meters Pair B      Normal
                Pair B     345  +/- 10 meters Pair A      Normal
                Pair C     345  +/- 10 meters Pair D      Normal
                Pair D     345  +/- 10 meters Pair C      Normal
core02#test cable-diagnostics tdr interface gigabitEthernet 1/0/12
TDR test started on interface Gi1/0/12
A TDR test can take a few seconds to run on an interface
Use 'show cable-diagnostics tdr' to read the TDR results.
core02#show cable-diagnostics tdr interface gigabitEthernet 1/0/12
TDR test last run on: August 12 20:11:20
Interface Speed Local pair Pair length        Remote pair Pair status
--------- ----- ---------- ------------------ ----------- --------------------
Gi1/0/12  1000M Pair A     347  +/- 10 meters Pair B      Normal
                Pair B     347  +/- 10 meters Pair A      Normal
                Pair C     347  +/- 10 meters Pair D      Normal
                Pair D     347  +/- 10 meters Pair C      Normal

You are totally surrounded! (finally)

May 11th, 2011

After many years of struggling to get my AppleTV (mk1) to a state where the surround sound actually works, I finally have it solved.
The answer was, buy a new surround decoder box.
So now I have a nice, shiny Yamaha A/V amp that decodes AC3 and DTS from the optical output of the ATV. And it’s not just this wonderful feature that makes the new amp great: it has also roughly halved the amount of cabling in my bedroom A/V setup. Whereas previously there was a separate hi-fi amp, surround decoder and speaker set, an A/V switchbox and two optical/co-ax transceivers plus power supplies, now there is the one box to rule them all, as it were. So now I have (almost) all the previous kit plugged into the same device and can switch between it, with the bonus of 7.1(!) surround capabilities.
So I can switch between the ATV, Wii, Galaxy Tab multimedia dock, Sega Saturn, Humax PVR & Toshiba DVD player from one place, each with their own appropriate speaker settings auto-selected.
The only thing it doesn’t do is transcode between HDMI/component/composite video (each source type only talks to its corresponding output), so I’ll still have to select differing inputs on the projector. The HDMI out from the amp will go through an HDFury box to convert it to component, as that’s what my projector supports… which means I still need the VGA switchbox that currently links the sources to the projector, but that’s not too much of a bind really.
So now all the 720p movies that previously played without sound (like the Star Trek movie collection and also the as-yet-unwatched-in-one-sitting Bourne trilogy) all play in HD (thanks to XBMC now having GPU acceleration on the ATV) and with full surround sound.
So thank you Yamaha for enabling a geek to lie in bed and be appropriately surrounded by really-big-screen entertainment!
Just another reason for not having to leave the house! ;)

Rooted Galaxy Tab turns itself off when the power button is pressed

May 8th, 2011

This was driving me crazy!
Whenever I unplugged the device from USB and then pressed the power button to turn off the screen, the thing would turn itself off!

What I found was that using SetCPU to clock the Tab down to 100MHz when the screen was off was actually causing the problem.
So the solution was to set the ‘screen off’ profile to a minimum of 200MHz with the ‘ondemand’ scaling governor, and all is good… :)

http://forum.xda-developers.com/archive/index.php/t-879264.html

“Are you using setcpu? I had the same problem when trying to use conservitave or set the minimum to 100mhz. Currently I use ondemand with a minimum of 200mhz”

Galaxy Tab + dropbox + docs to go + keyboard = laptop replacement

May 4th, 2011

So recently I bought myself a Samsung Galaxy Tab as it’s “not an iPad”, holdable in one hand and runs the same version of Android as my HTC Desire Z.
Slightly more recently than that, I went a bit crazy and bought a number of the accessories, including the keyboard dock.
Today I discovered that the android version of dropbox does actually do two-way sync, instead of just allowing you to download files from your dropbox. Obvious when you think about it, but I hadn’t tried it before.
What I was trying to do was to find a portable way of editing my expenses spreadsheet and for this I turned to ‘Documents To Go’. Mostly I’ve tried to keep to the ‘free’ android software in the market, however with now having two devices linked to the same google account, it seemed like a reasonable option.
So now, I can download the file from my dropbox onto the Tab, edit in D2G and on saving it gets uploaded back to the dropbox.
So basically what this means is that the tablet with the keyboard is now a pretty reasonable laptop replacement. If I want a bigger view, I can always plug into a TV with the video-out cable. This could also make a very portable presentation system, with the PowerPoint (spit spit) support of D2G.
Personally, what I would like to see is a combination of the desktop dock that has the HDMI output with the keyboard, which would then give you full-screen HD video (or at least the ability to link to it), along with the typability (?) that the keyboard dock brings.
That would replace a bulkier laptop with something you can take as a tablet when you need to move about (the office, primarily); then you get back to your desk, slot in the tablet and write up your notes, which could also be displayed on the bigger screen connected via the HDMI port (maybe in the future!).

‘Tip of the day’, for most of this week

February 12th, 2011

I had this plan for doing a tip of the day, as I have this week come across some small but useful things to share with the world.

So here they are on the Saturday.

Tuesday:
In Windows 7, Right-click on a scrollbar and it’ll give you a context menu. Then you can press ‘T’ or ‘B’ for top or bottom of the page. Same trick works for horizontal scrollbars.

Wednesday:
Regex in a bash if statement (note: =~ only works inside the double-bracket [[ ]] test, and the pattern should be left unquoted so bash treats it as a regex):
if [[ "$foo" =~ ^(one|two)$ ]]; then
echo "foo is one or two"
fi
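This one is easy to get subtly wrong, so here is a slightly fuller sketch of the same trick wrapped in a (hypothetical) helper function – remember that in bash 3.2+ a quoted pattern on the right of =~ is matched literally, not as a regex:

```shell
#!/usr/bin/env bash

# Report whether the argument matches the regex ^(one|two)$.
check() {
  if [[ "$1" =~ ^(one|two)$ ]]; then
    echo "$1 is one or two"
  else
    echo "$1 is something else"
  fi
}

check one    # -> one is one or two
check three  # -> three is something else
```

Anchoring with ^ and $ avoids surprises like “twenty-one” matching a bare one|two substring pattern.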

Thursday:
Apple, in their infinite wisdom, have yet again broken their SMB (Samba) support, this time in Snow Leopard.
My home storage machine runs Ubuntu and all my files are shared via Samba.
To connect to these shares from Snow Leopard, in the ‘Connect to Server’ box you need to specify the port explicitly:
smb://server:139/sharename
for it to actually connect.
Might update this post if I find a real solution…

Why XBMC rocks! (via postie)

December 29th, 2010

XBMC is, at least as far as I’m concerned, the only media centre choice for the discerning geek. Recently I upgraded my ‘cube’ machine and patched AppleTV to the latest 10.0 version, ‘Dharma’. The biggest difference, which I had heard rumours of for a while, is that the OSX/AppleTV version now supports hardware-accelerated H.264 decoding! So now, all those HD TV episodes and DVD/Blu-Ray rips I have (ok, up to 720p – I’m not expecting miracles!) actually play on what is essentially a 1GHz P3! There is still an issue where 5.1 AC3 audio doesn’t appear/downmix, but that could still be down to my surround decoder (still have tests to run for that). Additionally, I recently purchased a Hauppauge WinTV Nova-TD-500 DVB card to turn my ‘cube’ machine into a Freeview recorder, along the lines of my existing Humax 9200 boxes. The bottle that XBMC brings to this party is the ‘Video add-on’ known as MythBox, which is included in the default add-on repository. This extends the familiar XBMC interface to be a front-end for that other stalwart Linux-based media centre, MythTV.

After the relatively painless configuration of MythTV (add card as source, scan for channels, save channel names), the most techy change was to allow access to the MythTV MySQL database from the LAN IP and local network, for other distributed front-ends.
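For anyone doing the same, the change amounts to letting mysqld listen on the LAN and granting MythTV’s database user access from local addresses. A sketch, assuming the stock ‘mythconverg’ database and ‘mythtv’ user – the subnet and password here are placeholders, so substitute your own:

```sql
-- In /etc/mysql/my.cnf, change bind-address from 127.0.0.1 so mysqld
-- listens on the LAN interface, e.g.:  bind-address = 0.0.0.0
-- Then, in the mysql client, grant access from the local subnet:
GRANT ALL PRIVILEGES ON mythconverg.* TO 'mythtv'@'192.168.1.%' IDENTIFIED BY 'changeme';
FLUSH PRIVILEGES;
```

After restarting mysqld, remote front-ends can point at the backend’s LAN IP with those credentials.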

Then configure the locally installed MythBox to talk to the same database and use the same recordings directory – necessary for the two components to talk to each other. It should be noted here that this particular machine started as an xbmc-live 9.04 machine, has since been upgraded to 10.10 Maverick, and uses the XBMC PPA. I did try adding the Mythbuntu repositories to get the ‘latest’ MythTV packages, because the ‘other’ frontend running on the AppleTV in my bedroom (it’s small and quiet!) was hit by the protocol version mismatch introduced by the MythTV project (jumping from 56 to 23056!), which stopped the streaming of live TV. But that caused a bit of a nightmare, with MythTV expecting a libmyth version it didn’t have, so I reverted to the 10.10 versions and all was good.

The other major issue was that the supplied Hauppauge remote was either not getting its button-press notifications through to XBMC (so some, like the rather important ‘OK’ button, didn’t work) or XBMC was receiving the same input twice, resulting in equally unusable double button-presses. I spent some hours (well, an afternoon) on this, thinking it was due to an interaction between the kernel IR drivers being loaded twice plus lirc input, and working out how to disable the IR remote as an xinput keyboard device – all without a workable result. What turned out to be the somewhat simpler solution was ‘sudo dpkg-reconfigure lirc’, selecting the WinTV Nova remote as a device. After that, the XBMC debug log only reported a single button-press per button, meaning the other minor change was in ~xbmc/.xbmc/userdata/Lircmap.xml, to tell it about ‘left’ being ‘ArrowLeft’ from the remote.

After these happy events, all appeared to be working, so I took the machine back to its primary location in the living room for a demonstration. All the (configured) remote buttons work, it’s possible to record programmes from the DTV tuner/s and play back those recordings. Even watching live TV works! I was a little puzzled for a few seconds as to why some random thing was coming on when I tried to record an upcoming programme, then realised what I was seeing was the interlude before the expected programme! Next step is to get the extra remote buttons to do something useful, like having the ‘guide’ button bring up the MythBox TV guide… I haven’t yet had another chance to see if ‘watching live TV’ from another room actually works, yet the fact I can schedule programmes to record and later play them back is probably good enough for the moment. I may even end up eBay’ing at least one of my Humax boxes! ;)
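For anyone poking at the same file, the Lircmap.xml change boils down to mapping XBMC’s logical buttons onto the names the remote actually sends. A sketch with a hypothetical device name – match it to what the irw tool reports for your remote:

```xml
<lircmap>
  <!-- 'devicename' must match the remote name lirc reports (check 'irw' output) -->
  <remote device="devicename">
    <left>ArrowLeft</left>     <!-- XBMC's 'left' action bound to the remote's ArrowLeft button -->
    <right>ArrowRight</right>  <!-- hypothetical; map the remaining buttons the same way -->
  </remote>
</lircmap>
```

XBMC re-reads this per-user file on startup, so a restart of XBMC is enough to test each mapping.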

Legacy in 3D @ the Imax (via postie)

December 19th, 2010

I’d read a couple of reviews of the film, notably from Wired and Total Film, who saw this long-awaited sequel to the original ground-breaking film for what it was – more and better, or at least as good as what I remember from having my father take me to see the original as a small boy of 10… I’ve read that some reviews criticised the acting or choice of actors, to which I would answer “you’re missing the point” and “maybe the film isn’t for middle-aged types without imagination” (especially the guy from the London Evening Standard!).

For me, this was a film about possibilities and forgiveness. The possibilities of digital life from digital DNA becoming self-aware, and how we can assume the worst about someone when we don’t have all the information about their situation, then come to understand why they did what they did when we do. The digital recreation of the younger Jeff Bridges is really impressive, looking and moving as an almost perfect simulacrum – which obviously it is, but the technique could certainly be extended to reproduce famous actors of bygone eras…

The graphics, from the Disney towers of light in the opening titles to the light-fighters (not just cycles or cars!) had me, at times, almost in tears of wonderment, although some of that was down to the 3D (amazing for the cityscapes) and some due to the space-sized (it’s big. Really, really big) screen of the Imax. We (father, g/f & I) were in row G, so that meant our entire field-of-view was the screen…

I should really have booked the tickets as soon as I got the email saying when they were to go on sale, as the announcement of this film was indeed why I ‘just had to’ sign myself up for BFI membership, to be able to get members’ advance tickets. So I would have preferred to be a little further back, yet being where we were meant it was a ‘complete’ visual experience. What it does mean, though, is that to catch all the little details in the corners of the screen or backgrounds I will need to see it again, in at least full HD. Part of me was expecting all of the graphics to be full-depth 3D, given the generated environment, so there was a small disappointment there, but only a small one. I’ve decided I want one of the costumes, or at least a replica, so I’ll be saving the pennies for if Propworks get their act together on that! Oh, and probably also making sure I can be in shape for it! ;)

As for the story and the acting, for me it was certainly a father/son story about understanding responsibilities, although the g/f suggested religious overtones – the god-like powers of discovering/creating/overseeing what is essentially a new universe – which is certainly an equally valid viewpoint. On the acting, Jeff Bridges was great as the Zen-hippy beardy father. To be honest, I wouldn’t say there was any bad acting, and part of the point is that most of the film takes place in a ‘simulated world’! If you accept that, then you will more easily understand the film for what it is meant to be. Geek bits: inevitably the sequel is compared to the original, and I would say it is not just a worthy successor, but expands on and goes beyond the ground-breaking-ness of the original.

The subtle geek details that not everyone will get (like having Sam type actual unix commands when logging into his father’s dusty terminal) made me smile, knowing there were details for people like me. I know there are more, but that’s why I know I need to see the film again – the Imax can be a little overwhelming that way. Overall, it’s escapism that reminds me of being smaller/younger – the wonder of seeing the original on the equally massive screen of the original Leicester Square Odeon (thanks Dad! x).

If you don’t expect too much from the film, it will certainly deliver. If you have no expectations, then prepare to be blown away and left as speechless as I was (although some of that was indeed down to the Imax/3D combination). Awesome. +10.

Easy auto-mount VHD in Windows 7 (via postie)

November 11th, 2010

With my nice new shiny shiny laptop, because I wanted to multiboot it I’d partitioned the drive with the OSX Disk Utility. However, while Windows 7 will indeed recognise a (secondary) GPT-partitioned disk, it appears to require a hybrid MBR/GPT for installing, which means in Windows-land you’re pretty much limited to the 4 primary partitions for OS installs – although I have read of a special version of gptsync that allows you to select any 4 GPT partitions for your MBR. Previously, on my Toshiba tablet, I’d kept separate MBR extended partitions for my DJ/VJ data & extra storage. On the new Vaio the easiest partitioning scheme has turned out to be one really big Windows partition, with OSX before it and Linux after (partition #1 is the hidden EFI partition that Disk Utility creates).

The first thing I tried was the old DOS command subst, to mount a directory as a drive, to keep the disk tidy and also ease the transfer of the video/audio apps, which expect the VJ files on D:, music on E: & storage on F:. However, this could result in a big live recording filling up the system drive, so I decided to use VHDs to keep my VJ, DJ & other data contiguous and separate. Then I discovered that creating the virtual disks doesn’t keep the mounting of them persistent over reboots. A quick google found a couple of methods for auto-mounting them, but these required the use of PowerShell scripts and an old-school batch file to call them.

But there should be a way that doesn’t require writing anything! (Is what I thought to myself.) And there is. The ‘Gizmo’ utilities allow the mounting of a wide range of virtual disks, including our friend the Microsoft VHD file. What is more, there’s a checkbox on the mounting dialog that says ‘remount at boot’. Job done! ;) It does indeed do what it says, although it does take a few more seconds after booting for the virtual drives to show up.
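For the record, the script-based approach I decided against boils down to a diskpart script run at logon (via Task Scheduler or a batch file). A sketch, with hypothetical file names and paths:

```text
rem mount-vhds.txt -- run as: diskpart /s mount-vhds.txt
select vdisk file="C:\vhd\vjdata.vhd"
attach vdisk
select vdisk file="C:\vhd\music.vhd"
attach vdisk
```

It works, but it’s exactly the sort of extra plumbing the ‘remount at boot’ checkbox makes unnecessary.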

Why I was right all along… (via postie)

June 21st, 2010

Over the last couple of weeks I’ve had a couple of job-search results that have looked rather familiar. These were for a particular spec from a particular agency, who were the only agency dealing with a particular company – the same one I interviewed for a few weeks ago, yet didn’t get, due to a disagreement in methodology between myself and their Polish ‘Technical Lead’. Then, over the last weekend, I was having a chat with a couple of the tech guys I know who just happen to work at the hosting centre (‘The Bunker’) where this company that I’ve ended up not working for have their servers hosted. Now, with these guys being proper techies who know what they’re doing, they agreed with my points on why the Eastern European ‘Technical Lead’ was in fact mistaken, with one of them mentioning that the TL is somewhat difficult to deal with and the other suggesting that while our friend from the former Eastern Bloc may have adequate unix skillz, his methods (like changing firewall policies without notification because he didn’t like them) may leave something to be desired. So, what were these disagreements? And why was he mistaken?

* “CentOS is the only operating system of choice because it has a 6-year support life”. You see, I’d been asked which Linux distro I preferred and I said I (personally) preferred Debian, for the way it is engineered. For me, the RedHat-based distros, although they no longer suffer from ‘dependency hell’ with the addition of yum and their own repositories, still feel somewhat clunky. Also, RedHat encourage the use of their own command-line tools for systems management, whereas the Debian-based distros encourage you to edit config files yourself, thereby giving you the experience of how components are configured and where you should check for errors if some service or component is misbehaving (as opposed to turning something on or off with a curses-based interface).
Also, given that the service life on most new hardware is 3 years, does having an operating system that may be supported for twice that really make that much difference? Ubuntu’s LTS releases give 3–5 years of support (desktop/server), which for me is fine enough. Now, although personally I may prefer Debian/Ubuntu, professionally I would have to say I prefer Solaris, mainly for ZFS and Zones/Containers. I like that it’s solid, proven and well supported. Which brings me on to point 2.

* “Every system should always have the latest updates”. Well yes, in theory that may sound like a perfectly fine statement, yet in reality, for servers, you may not want to do that. During the second hour-long technical test for this job-that-I-didn’t-get, I wanted to look up an ssh config parameter, so I logged into a machine of mine that happens to still be running Debian Sarge. Now, some of you may recoil in horror at this, it being two iterations behind current and relegated to the Debian archive. Yet some of you will understand how sometimes there are situations that preclude the updating of what may be a legacy system, due to any number of factors: needing particular versions of libraries, usage of the system making it difficult to find a time to upgrade (say, if 4 different teams are required to agree on a time but never can), or needing to plan a migration strategy because upgrading remotely is too much of a risk. In fact, the Sun IPXs that I rescued from my last employer (I wanted them for a project) were still running Solaris 2.4 at the time of their redundancy, which was only around spring 2009. So the age of a system is not necessarily related to its effectiveness or the validity of its usage. From my conversation of last weekend (as long as I remembered this right!), the guys at ‘The Bunker’ have a policy where you only update a system if the update has been shown to resolve a known vulnerability and gets signed off (which I’m taking to mean it has already been tested elsewhere!). Which, for me, being someone who prefers a system to be as stable as possible, makes perfect sense – “If it’s a live system and it ain’t broke, then don’t risk f*cking it up!”, unless you can justify the update. Which leaves me with one other point:

* “Every machine should have its own firewall”. Now in isolation, this may seem a perfectly acceptable statement. Which it can be, as long as you have only (e.g.) 6 machines. Once you get to 60 or 600, the idea collapses – do you really want to do that much per-machine management? For me, the correct statement would be “Any network of sufficient size should have its firewalls (as in a failover pair) at the edge of the network, where they belong”, accompanied by “Treat any small network as if it were larger; that way you’ve already prepared for it to be scalable”. What this shows me is that our ‘friendly’ Technical Lead has obviously never managed a larger system. Otherwise he would think differently.

What I find most depressing is that these people, who might be very good at what they do (within a very restrictive set of parameters), get to be in positions of influence because they are able to shout loudly that you should listen to them, because they’re the best ever whatever and they believe themselves to be right, as they’ve never been wrong. Yet those above are happy to leave them to it because of that self-belief and ability to justify their decisions, however ‘wrong’ they may be in the bigger picture. For someone like me who has, well, more years of experience than I would sometimes like to be reminded of, I know I have the knowledge and wisdom to know not just when to use a particular operating system or service for a particular application, but when NOT to. Which is experience that our friends from the east (be that Europe or especially India) generally don’t have… So while I’m sitting there in some recent interviews being told some complete nonsense, it is somewhat difficult not to speak up and tell this other person, “Hold on, if you stop and look at whatever-this-is from a slightly wider perspective (that you’d only get from more experience), you’d understand how you are in fact talking complete bollocks” – when this other person is the one making the decision…