Showing posts with label Windows. Show all posts

Thursday, 2 October 2014

Time Flies

I've just had a bit of a surprise when I realised how long I've been using Linux as my main operating system.

"How long?" I hear you ask...

Thirteen years.

Yes folks, thirteen years!

Around the time that Windows XP was released, I'd switched to Linux as a "can I use it full time" experiment, and never went back.

These days I'm not even dual booting: Ubuntu is the only OS on my computer, and it does everything I need at present (I'm not a huge gamer).  I'll admit I have got a Windows XP virtual machine, but in all honesty it doesn't get used and will probably be deleted if I start running low on disk space.

In that time I started off using Mandrake Linux, switched to Slackware (which I used for a good number of years), and then moved to Ubuntu.  I never found any good reason to switch back to Slackware - so I've been using Ubuntu ever since.

It's true what they say, "Time flies when you're having fun..."

Saturday, 11 February 2012

Slow Web Browsing in Ubuntu

A couple of years ago I bought a Belkin wireless router. No big surprises there. My main PC connected to it via a network cable, my kid's PC using wireless.

In Windows it worked great. In Linux it didn't. There was a noticeable delay (around four seconds) from entering a web address to the page starting to load. Every time.

In the end I switched to a different router which nicely avoided the problem, and the old router took up its new position as an "emergency backup."

Now the new router has finally failed, and so I've switched back to the "old" one, and yes, it is still slow to browse in Ubuntu. The funny thing is that once the page starts loading it loads quickly, but there is that long delay before it starts.

So now I'm stuck with it, and with the benefit of a couple of years' more experience under my belt, it's time to find out what's going wrong.

Let's turn to our good friend dig and see what's going on:

dig www.google.com

This gives the following response:

;; reply from unexpected source: 8.8.8.8#53, expected 192.168.1.1#53

; <<>> DiG 9.7.3 <<>> www.google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59114
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.google.com. IN A

;; ANSWER SECTION:
www.google.com. 3 IN CNAME www.l.google.com.
www.l.google.com. 3 IN A 209.85.229.99
www.l.google.com. 3 IN A 209.85.229.103
www.l.google.com. 3 IN A 209.85.229.104
www.l.google.com. 3 IN A 209.85.229.147
www.l.google.com. 3 IN A 209.85.229.105

;; Query time: 36 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Sat Feb 11 17:45:21 2012
;; MSG SIZE rcvd: 132

The first line gives a very good clue as to what the problem may be.

reply from unexpected source: 8.8.8.8#53, expected 192.168.1.1#53

The unexpected source is what I would have thought was my primary DNS server (and indeed the router's configuration confirms this, with 8.8.4.4 as the secondary). The "expected" IP address is the router itself. This explains the slow browsing: there will be a delay while the DNS request is made to the router (which doesn't respond) before moving on to the true DNS server (which does).

Running the connection information tool confirms this. The router is being set up as the primary DNS server, with 8.8.8.8 as the secondary and 8.8.4.4 as tertiary.
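The same thing can be checked from a terminal. On the Ubuntu of this era the resolver order lives directly in /etc/resolv.conf, with the first nameserver entry tried first (the exact contents are obviously machine-specific):

```shell
# The nameservers the resolver will try, in order - the first entry is
# asked first, which is why a dead server at the top of the list causes
# a delay on every single lookup.
cat /etc/resolv.conf
```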

The fix? I've set up a manual IP address for the desktop, just outside the router's DHCP scope, and manually specified the correct DNS entries (remembering to set the router as the gateway). Browsing is now just as fast in Ubuntu as it is in Windows.
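For anyone who prefers to do this by hand rather than through the graphical connection editor, the equivalent in /etc/network/interfaces looks something like this (the interface name and addresses are examples for my setup, so adjust to taste; the dns-nameservers line relies on the resolvconf package):

```text
# Static address outside the router's DHCP range, with DNS pointed
# straight at the real servers instead of the router.
auto eth0
iface eth0 inet static
    address 192.168.1.200
    netmask 255.255.255.0
    gateway 192.168.1.1               # the router still routes traffic...
    dns-nameservers 8.8.8.8 8.8.4.4   # ...but no longer proxies DNS
```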

Sorted!

Friday, 29 May 2009

Thoughts on Unix File System Structure

There is a school of thought that believes that the Unix File System layout (or Filesystem Hierarchy Standard) is needlessly complicated. Some would go as far as to claim that it is fundamentally broken.

The recent post on OSNews by Thom Holwerda (and especially many of the comments) provides some good examples of the perceived problems and the oft-touted solutions.

There is a counter argument though, which goes something like this: "the Unix File System has been evolving for well over 30 years, isn't it strange that no-one noticed just how broken it is?"

Let's look at some of the arguments for and against the current system.

To make this easier, I'll pick up on some of the comments and see if they can be answered.

One quick point: in most cases I'm going to refer to Unix where this covers all Unix-based operating systems. If I say something specific to GNU/Linux or another operating system then I'll name it.


I'm just trying to explain that many people are put off diving further into the intricacies of the computer simply because of how daunting everything is. By making a system easy to use and understand not only at the very highest level (the UI) but also all the levels below that, we might enable more people to actually *understand* their computers better, which would be beneficial to *all* of us.

I am of the strong belief that there is no sane reason WHATSOEVER why we couldn't make computers easier to use and understand on ALL levels, and not just at the top - other than geek job security.

This is a good place to start. The layout is there to simplify maintenance of the system, not to complicate it. This is nothing to do with "Job Security" - more to do with making a usable, maintainable system. Having people dipping into the OS structure (whether it be Windows, Unix or MacOS) would create MORE work for the support geek, not less.

I'll give you a real life example. Around ten years ago I installed Linux for a friend. This was back in the days that installing it was still a bit of an art. At the time getting XWindows up and running was cause for celebration, and as for working sound, IN YOUR DREAMS BUDDY!

After an hour or so of fiddling around with the config files and xf86config (remember that?), and making sure that the correct packages were installed, I gave him a quick run-through of how the system worked. As he had come from a DOS/Windows background I'd configured everything to look pretty similar, and showed how the basic commands worked (use "ls" instead of "dir", "cd" works about the same, "rm" instead of "del" and so forth) and gave a quick guided tour of XWindows, X11Amp and the other installed goodies.

He collared me the next day: "I thought you said this Linux stuff was stable. I restarted it and now I can't get back in! It's shit!"

On further investigation what he had done became apparent. He'd had a wander through the file system, picked some files that "didn't look important" (including /etc/passwd in this case) and deleted them to free up a bit of space.

This doesn't just happen with Linux. A year or so later the Telecoms manager at my company phoned me because his PC wouldn't boot any more. He'd been trying to upgrade Internet Explorer to the latest version and had run out of space on his C:\ drive. He'd managed to find a folder that didn't look that important but "had a lot of stuff in" it that he "didn't need" and deleted it. His PC had crashed part-way through and now it wouldn't start any more. Sadly the junk folder he'd chosen was called C:\WINDOWS.

OK, so these are vaguely amusing war stories but what is my point? Well, my point is this: Users don't understand operating systems. I'd go as far as to say that they shouldn't actually have to. In the majority of cases the best thing that a user can do is to not mess with the underlying OS at all. Hiding as much of it as possible from them is A Very Good Thing Indeed.

As is traditional at this point, let's turn to our old friend the analogy. Many people drive cars. You sit down, turn the ignition, grab the steering wheel, press down the accelerator and off you go (yes, I know that there is a little more to it than that, but you get the general idea). Now, how many drivers could strip an engine? How about the gears - do you know how they work? Could you strip the gearbox down if you needed to and reassemble it in working condition afterwards?

The fact is that you don't need to know the mechanics of a car in order to drive one. Although anyone could pop open the bonnet and have a root around inside most people don't. If there is a problem, they take it to a garage.

Of course, some people DO tinker with their cars. They take a great deal of pride in being able to maintain and even customise their car. Is what they do easy? Of course not. Can anyone do it? No. Only an idiot would imagine that everyone can do everything, some degree of knowledge or learning may be required. This isn't meant to be an insult, but it is a fact.

To come back to the point of the analogy: is the car any less useful because people don't understand how it works? Of course not.

This follows through to computers. Most people can use their computer quite happily with no idea of the underlying mechanisms. If they have problems then they can get in touch with their friendly neighbourhood technician. There is nothing stopping them learning about it if they want, just don't expect it to be easy. Just like a car, an operating system (and its component parts) is made to fulfil a function, not to be played around with.


pulseaudio is yet another layer on top of a broken audio foundation. Adding layers does not make things better, it just hides it a little longer.

Another good example of mistaken thinking. Abstraction can be a very good thing, and pulseaudio is an excellent example of this. Let's see how this works.

Just for the sake of argument, let's say we were trying to write a simple audio player on GNU/Linux. Now, how do you make it play sounds? At a very basic level you might write directly to /dev/dsp. So now your app plays sounds. It might lock the /dev/dsp device, but hey, this is just a simple example.

Let's up the stakes a bit and try and port the app to, say, Windows. What happened to /dev/dsp? It doesn't exist. How about MacOS X? Nope, not likely to work here either.

How does this relate to abstraction? Well, if our audio app uses pulseaudio to play its sound it will now work on any platform that pulseaudio is supported on. For something like KDE, which is aiming to be a cross-platform environment, this makes coding your apps an awful lot easier.

In other words the GNU/Linux audio foundation isn't broken, it just doesn't exist on Windows.
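As a quick sanity check of that claim, you can test for the old OSS device yourself (a trivial sketch; which branch you land in depends entirely on the system you run it on):

```shell
# Code written straight against /dev/dsp only works where the OSS device
# exists - which rules out Windows, MacOS X, and most modern Linux setups.
if [ -e /dev/dsp ]; then
    msg="OSS device present - direct writes would work here"
else
    msg="no /dev/dsp - an app hard-coded against it plays nothing"
fi
echo "$msg"
```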


Why bin? Because that's where your 'binaries' are, right? oh, except there are programs now that are text files run through an interpreter, so that doesn't really apply. A user's files aren't under /usr, my webserver by default isn't under /svr, it's under /var/www. /etc? Yeah, something about etcetera really says 'config files'. Seriously, who thought /etc was a good name?

This is the biggie. To answer this, it is necessary to look at and understand where Unix came from.

First, another quick experiment. Try and find a Unix reference manual from any time in the last twenty years or so. The command references are still likely to work. Any shell scripts (providing you are using the modern version of the same interpreter) are also likely to work without any changes.

In the earliest days of Unix space was at a premium. Shorter command names meant shorter scripts (and less space in the file allocation tables). This is why the "base" commands are only two characters long, for example, "ls", "cd", "rm", "du" and so forth. Although we don't have the same physical limits these days there are a lot of scripts out there that rely on the short versions of the file names. Keeping them the same means that people don't have to re-learn all their skills with each new release of the OS (something that Microsoft could learn from).

This also follows through to the file system layout (again, I'm going to simplify this a bit, but hopefully you'll get the idea).

At the root of our Unix system we find these main folders:


/ -- root
/bin -- binaries
/sbin -- system tools (e.g. fdisk, hdparm, fsck)
/lib -- libraries
/etc -- configuration files / scripts / anything that doesn't fit in the other directories



These are the most basic parts of your Unix system. These are the base commands and libraries that are required to give you a bootable system with access to a network.

Moving down the tree, we come to /usr.


/usr -- root
/usr/bin -- binaries
/usr/sbin -- system tools
/usr/lib -- libraries
/usr/etc -- configuration files / scripts / anything that doesn't fit in the other directories



This is the next level up. /usr is NOT where user files are stored, nor is it for user-built versions of applications. In this case "usr" stands for Unix System Resources (although originally this was the location of users' home directories). This is where the vendor-provided files live (the stuff that isn't part of the standard base files). For those who argue about everything being shoved into /usr by Ubuntu, RedHat or whoever: this is actually where they SHOULD go. Anything in here should have been provided by the distro maintainers. Between them, / and /usr should contain everything that your operating system needs. All applications, all configuration files, everything.

So what about /usr/local?


/usr/local -- root
/usr/local/bin -- binaries
/usr/local/sbin -- system tools
/usr/local/lib -- libraries
/usr/local/etc -- configuration files / scripts / anything that doesn't fit in the other directories


The /usr/local section of the file system is where any binaries that YOU create are stored, along with their configuration files. If you wanted to create a custom version of any application it should appear in here. This keeps your stuff separate from what the vendor provides, and in theory prevents you from permanently damaging the operating system. If you do manage to balls things up totally then deleting /usr/local should be enough to fix it again (as all the vendor provided files should still be intact and untouched).
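That separation is easy to demonstrate with a throwaway directory tree (everything here is a scratch simulation under a temp directory, not your real file system):

```shell
# Simulate the vendor (/usr) and local (/usr/local) tiers in a scratch tree.
root=$(mktemp -d)
mkdir -p "$root/usr/bin" "$root/usr/local/bin"
printf '#!/bin/sh\necho vendor\n' > "$root/usr/bin/hello"
printf '#!/bin/sh\necho custom\n' > "$root/usr/local/bin/hello"
chmod +x "$root/usr/bin/hello" "$root/usr/local/bin/hello"

# /usr/local/bin comes before /usr/bin in PATH, so your build shadows the vendor's:
first=$(env PATH="$root/usr/local/bin:$root/usr/bin" hello)

# Deleting /usr/local "repairs" the system - the vendor copy was never touched:
rm -r "$root/usr/local"
second=$(env PATH="$root/usr/local/bin:$root/usr/bin" hello)

echo "$first then $second"   # custom then vendor
rm -r "$root"
```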

Another benefit of this approach is that once your root system is installed, the actual location of /usr becomes irrelevant. It could just as easily be on a shared network drive as on your local disk. If disk space is at a premium this can be a very effective way of working. It also means that every user has the same base system, because they are all running the same apps from the same place.

OK, so that's not as useful for a single-user system, but it is still functionality that is used in some places. Just because YOU don't use it, doesn't mean it isn't useful.

Before anyone pipes up: yes, I am fully aware of /opt, /var, /tmp, /dev and so forth. All of these have their uses, but they are not relevant for the purposes of this discussion.


For a start, it has a gaping hole: he doesn't explain how you separate "System" from "Programs".

That's a big giant gaping hole in Linux, not in Thom's proposed filesystem layout. There's no such distrinction in a Linux distro, as there's no such thing as "the OS" vs "the user apps". Once someone gets the balls to stand up and say "this is the OS" and package it separately from "the user apps", the FHS will never change.

Actually, GNU/Linux and Unix already separate the OS from the user apps. Remember our three levels? The bottom level is the OS - the bit you need to get a working system (ie. /bin, /sbin, /lib and so on). Anything in /usr or above is a user app. Yes, you may see XFree86 as essential, but GNU/Linux can run without it. The same goes for Mozilla, Firefox and anything else in /usr or /usr/local.

* * *

The biggest problem there is with operating systems in general (not just GNU/Linux) is that for some reason people assume that it should all be easy. The desktop is easy to use therefore the underlying system should also be easy to use.

This is a very strange form of logic. Simplifying where necessary is a good thing, providing it doesn't impact on functionality or reliability. To go back to our car analogy there would be an argument for simplifying the innards of the car to make it much easier to understand and maintain for the common user. As a thought experiment, let's try it.

Let's start with the gearbox. Much too complicated and a potential point of failure, choosing a good default gear should do away with the need for that. How about a petrol engine? All that internal combustion malarkey sounds a bit dangerous to me. Running a vehicle based on small controlled explosions? Stuff that for a game of soldiers! Let's replace that with an electric one. But wait, maybe some people don't understand how an electric motor works either. So on second thoughts, let's replace it with a pedal driven one.

Hmm, it's a bit heavy to pedal, so let's remove most of the metal bodywork; a canvas roof should suffice (plus it's easy to repair or replace).

Anti-lock brakes? They'd have to go as well - disc brakes are much simpler. Power steering? Not really needed now, drop that too. We can also leave out the airbags, as we won't be going that fast any more anyway.

So what are we left with? Basically a four-wheeled bicycle. Handy in some circumstances, easy to maintain but not necessarily as useful as what we started with.

Yes, this is taking it to the extremes, but that is the equivalent of what people are suggesting is done to the Unix file system. Let's remove everything that we don't understand the reasons for and just use what is left. Sadly what is left may be easy to understand, but its functionality would likely be crippled.

* * *

Does any of this mean that people (like GoBoLinux for example) shouldn't experiment and try different things? Of course not. Finding new (and potentially better) ways of doing things is something that can end up as a benefit to everyone. But making changes for the sake of being different is not so good.

Looking closer at GoBoLinux, it is adding one hell of a lot of complexity to the system just to keep things working (have a look at http://www.gobolinux.org/index.php?page=at_a_glance and ask yourself about all the symlinks), whilst losing some of the benefits of the traditional Unix system.

Reading http://www.gobolinux.org/index.php?page=doc/articles/clueless gives plenty of information on why GoBoLinux have chosen their approach. It also reinforces some of the points made above, especially with regard to the three-tier approach of traditional Unix.


* * *

In the end, used properly, the current Unix File System layout actually works rather well. Changing to something else isn't going to solve the problems of people ignoring a standard. All it will achieve is change for the sake of it, and chances are some benefits will be lost in the process.

Wednesday, 23 April 2008

Mac shares on Server 2003

Today's challenge was to get file sharing working for MacOS 9 clients on our new Windows 2003 server.

Installing "Services for Macintosh" is straightforward, and as such I won't go into that here (if you are stuck at this point then here is a hint for you: in "Add / Remove Programs" there is a "Windows Components" section - you install it from there).

Once it was installed, I created a new Macintosh share (and received a warning that the share would be created as read-only for mac clients).

But when trying to browse to the server from the Macs - I couldn't see the server at all.

The reason for this turns out to be simple. We have two onboard network adapters on the server, but are only using one of them. The AppleTalk network protocol will only work against a single card in the machine, and it had associated itself with the wrong one. Ticking the "Accept inbound connections on this adapter" checkbox in the "AppleTalk Network Protocol" properties on the active card (and selecting the correct AppleTalk zone from the pulldown list) was enough to get this working.

Well, nearly. The server showed up OK, but none of the clients could connect to it. They get an authentication error instead.

To get this working, first launch the server manager (right-click on "My Computer" and choose "Manage"), then right-click on "Shared Folders" and select "Configure File Server for Macintosh". The default authentication type will be "Microsoft only". Change it to "Apple Cleartext or Microsoft".

Now the Mac clients can log in.

But wait - the share is still read only!

To sort this, call up the properties of the share (in the server manager) and untick the "This volume is read-only" checkbox that is selected by default(!).

And there we have it. Job done!

Saturday, 2 February 2008

XP on old hardware

Regular readers may remember the "fun" I had trying to get my el-cheapo wireless cards running under Linux. The USB adapter as it turns out is broken (it only runs for ten minutes at a time before crapping out), the WalkLAN PCMCIA card doesn't run properly under Linux although it is supposed to be supported (it locks the system solid).

To test the card out I decided to re-install Windows 98SE and test the card with that - but wouldn't you know it, I can't find where the install CDs have gone. Fortunately I'd a spare XP Pro license so for an experiment I've installed that instead.

The actual install was straightforward, if a bit slow. I've got to admit I wasn't expecting too much performance-wise from this once it was installed. After all, a Celeron 433 with 160 meg of memory is hardly an XP powerhouse.

How wrong I was.

It runs like a charm. It is fast to boot, all the hardware appears to be supported (mind you, I'd have been surprised if it wasn't) and after tracking down the driver for the WalkLAN card the wireless support is spot-on.

At the moment to stress-test the card I'm installing all the required updates to Windows XP (nearly 100 of them!) which should take over an hour to do. And while I'm doing that I'm also browsing the web, and writing this in Blogger. Internet Explorer, while it isn't my favourite web browser by a long chalk, is actually performing well, and is certainly keeping up with my typing.

The battery life seems OK too. I'll test this a bit more over the weekend and update this with my thoughts on it, but looking good so far!

Update 04/02/2008

I've been pleasantly surprised with how well this runs. OK, so it's a bit iffy for YouTube, but that's more down to the display than the Laptop (lots of moving stuff tends to blur quite a bit). But beyond that small gripe the rest of it is fine and dandy.

Hibernate works well - it shuts down quickly and restarts from cold in well under 40 seconds. This is probably down to the fact that it only has 160 meg of memory to restore.

Browsing works well, both in Firefox and Internet Explorer. The wireless card has worked flawlessly - even during the kids' mammoth CBeebies browsing session lasting a couple of hours.

Before anyone asks, no, I'm not becoming a Windows convert. If the wireless card had been supported properly in Linux I'd have still been using that, but until I get the chance to swap the card for something better then I'm pretty much stuck - especially as I'd need to prise the wife away from the laptop with a crowbar to get her off eBay. . . .

A couple of things that are pretty good to install to get that good freeware feeling back:

Rocket Dock - A pretty good MacOS-style dock clone. Although officially it needs a 500 MHz processor (or higher) it works really well on a 433.

The Gimp - Needs no introduction really. My image editor of choice.

Open Office - The Microsoft-beating free software office package (well, nearly anyway).

Friday, 7 September 2007

Nobody's Working #2

Back on the coalface, here we are playing with the new Mail Server.

For me the jury is still out on Merak. I've got it sending and receiving email for our test domain - which is good.

The server collapsed in a heap the second one of my managers asked to see it in action - which is less good.

Now I have been configuring a lot of stuff on the test server today, and running it on a virtual server is less than ideal (at least, one using SATA drives like ours), so I'll have to keep an eye on it and see if this happens again.

Having incoming and outgoing mail working certainly makes me happier about the product - now I'll have to find out how to configure it to do everything that we want.

For reference, when it comes to diagnosing problems with email servers, SMTPDIAG is your best friend (on Windows at any rate). Between that and realising that our current mail server redirects all outgoing mail via an external proxy (so we needed an exception adding for our test domain), I got everything working.

Next task: getting the calendar working.

And it looks like that task wasn't too difficult - apart from the fact that it isn't documented. Hmm, I can see a pattern emerging here.

By default, when a user clicks on the calendar button from the web interface they'll get an error message "Could not login to calendar server".

To get the calendar functionality working, you'll need to do the following from the Merak admin tool.
  1. Select the GroupWare section
  2. Click the DB Settings button
  3. Click Create Tables and wait for the confirmation message
That's it. Job done. Easy when you know how isn't it?

Wednesday, 5 September 2007

Nobody's Working #1

Well, here we are at work. What is nobody doing today? Installing a virtual Windows 2003 Server ready to start testing IceWarp's Merak Mail Server.

(How far along is it?. . . . Still formatting C:.)

We've already bought it - but this is just to get a bit of experience before the real install once the license / server hardware arrives.

One thing that surprised me is that it doesn't have its own database backend - instead it relies upon either an Access .MDB file (yuk!) or an external database. For me it's a toss-up between MySQL and Microsoft SQL Express. As the price is the same for both (free), and seeing as I already run SQL Server 2000 servers here, the obvious choice is SQL Express.

Dear reader, don't take this as a slight against Open Source. I really like Open Source products. I run Linux at home as my main OS and have done for around five years now, but at work - well, standardisation is king. This is also because I won't be the only person maintaining this and, well, Microsoft's SQL Server (and Express too) have very good management tools. Yes, MySQL is getting better, but it isn't there yet.

(Copying files. . . .)

So on to Merak. Not a particularly well-known product - at least, I'd never heard of it before now. One of its selling points is that it fits in well with Exchange clients, supports shared calendars, has loads of plugins to extend the functionality of the server, and is a damn sight cheaper than Exchange. It is also cheaper to add additional user licenses. Apparently we've bought 1000 user licenses for this.

(Please wait. . . Oh, we've rebooted. Hang on a tick, I'd better get the rest of the install under way.)

Merak can run on either Linux or Windows. It looks like it supports most of the major Linux distros. As we are integrating this into our Active Directory structure we're having to use Windows Server 2003.

("Setup will complete in 39 minutes". Yeah, sure it will.)

All the files are available on the Merak website, and will run quite happily for 30 days before you have to register it. Once you give it your license key then the demo becomes the full, unrestricted version.

(Adding the machine details. One question. Which berk decided to make Tijuana the default time zone for Windows installs?)

Once the initial server install is done, I'll install SQL Server Express. I've not had a chance to play with this yet, so if it turns out to be crap then I'll switch over to MySQL. Hopefully it won't come to that. Let's have a look at what it provides while I'm waiting for the server install to finish.

. . . .

Scratch that, we're going to have to use MySQL. The limits on the database size and number of processors would cripple the mail server. Thank you Microsoft.

("Installing start menu items. Setup will complete in approximately 19 minutes". Liar. )

Right, I'm off to lunch now. Hopefully the server will have finished the initial install by the time I've eaten my sandwiches.

. . . .

OK, so Windows 2003 is now installed and running as a terminal server. MySQL (and the tools) are installed, so it's time to get on and install Merak itself.

. . . .

Well, installing MySQL was nice and straightforward. It was an object lesson in clicking Next, Next, Finished. Installing the tools was just as easy. The only small gotcha is to remember that by default you must connect to the databases via localhost rather than by the machine name.

Installing Merak is pretty straightforward too. During the install you get two choices - Easy and Advanced. The easy option installs the Access DB backend and is recommended for 100 users or fewer. For any more than this you should use the advanced option and then select the required backend - and it supports plenty of them. We chose the MySQL backend, entered our login details, realised that we had to create a blank schema in MySQL for Merak to create its tables in, and then let the installer do its job. Finally we added our administrator account and set up the mail domain.

So far, so good. Now to play with the server and see what it can do.

. . . . Later . . . .

That took some getting going. In fact, even though it isn't documented it took a reboot before any of the mail accounts could be accessed.

The web admin tools seem nice enough, and the web mail client is pretty neat. In fact the default interface is nearly a clone of Outlook 2003 - and includes calendaring functionality (which I can't get working yet).

I've been able to send emails OK - but not receive them yet. I'll have to double-check the MX records for the test domain.