Linux 20 years ago – RedHat 4.1 on the PCW magazine coverdisk

20 years ago virtually nobody had fast internet. If you were outside of Government, big Corporations or Academia, your Internet experience was most likely just a slow dial-up modem. This was great for exploring the early WWW or sending emails, but asking someone to download enough software to leave the confines of DOS/Windows and start to explore Linux was daunting.

As explained in my other article, vital to the growth of Linux back then was the then relatively new practice of computer magazines coming with first CD-ROMs and then DVDs attached to their cover. You still see cover DVDs on many magazines, but with widespread high-speed broadband they are more a convenience than a necessity nowadays.

The most important task I had as the organiser of the UKUUG Linux SIG back then was to enable more people to get a chance to try Linux, which even back then was a far better way to use the computers available at the time than Windows was. I worked with several of the main Linux vendors at the time to get bare-bones versions of their various offerings onto magazine cover disks. For the May 1997 Personal Computer World magazine (the biggest in circulation at the time) I arranged for version 4.1 of Red Hat Linux to go on, together with the HTML version of Matt Welsh’s book and a pretty complete set of the Linux HOWTOs. In all this saved people many expensive hours of downloading.

PCW Coverdisk containing RedHat 4.1 Linux

I also wrote some install instructions at the time and a longer article called “The Battle for the Desktop”; both are reproduced here for posterity 20 years on.

First Steps in Linux for the timid

by Martin Houston
So you think you might like to see what Linux is all about?

For those who do not yet know, Linux is the wonderful new Operating
System that builds on the solid foundations of UNIX and can act as a complete replacement for DOS/Windows or Windows 95.

The really remarkable thing about Linux is that because it was developed
by a large team of skilled volunteers over the Internet it is available
for low cost from many places.

On the cover CD of the magazine you have in your hands is Red Hat Linux 4.1, a complete, modern distribution of Linux that you can install on
as many computers as you like – free of charge. Not only is the software
provided but you will also find all the documentation about Linux that you are likely to need, including much in easy-to-use HTML format. The HTML documentation, comprising HOWTOs on many subjects and Matt Welsh’s “Linux Installation and Getting Started Guide”, can be found in the \HOWTOS and \INSTGUID directories on the CD and should be readable in Netscape Navigator or another web browser from within Windows.
Other documentation for Linux makes use of long file names and
special document formats that will only be available to you once you take the plunge and put Linux on your system.

The purpose of this article is to take you through the installation of
Linux on your PC in such a way that you can later change your mind if
you find that Linux is not for you. Of course if you know that Linux
is for you the best thing that you can do is completely re-partition your system so that it runs only Linux and DOS is relegated to the Linux DOS emulator DOSEMU.

Most people will however be happier with a ‘Dual Boot’ system that allows Windows and Linux to live side by side, even with a moderate amount of peace and harmony.

Firstly PLEASE BACK UP YOUR COMPUTER. Following the instructions here has not resulted in any data loss for me, but your machine may be
different, or you may just make a mistake following the instructions. If
you do not have an easy method of backing up then may I suggest the
purchase of an Iomega Zip Drive. This unit has the advantage that you
can also make use of it later under Linux.

When you are sure that you have a complete and verified backup the next
stage is to spring clean your machine so there is room for Linux. To
give yourself a fair chance of seeing all that Linux has to offer at
least 150 MB of unused disk space is required.

If you have a hard disk of 1 GB or more consider allowing at
least 300MB for Linux. It will be considerably more difficult adding
more space later if you are mean at this stage.
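You can check how much free space you currently have from a DOS prompt with either of the standard commands below – look for the “bytes available on disk” (CHKDSK) or “bytes free” (DIR) figure, which needs to comfortably exceed the space you intend to hand over to Linux:

CHKDSK C:
DIR C:\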

How you do this spring cleaning I leave to you. Ask yourself “Do I really ever use this?” about the various software packages that we all seem to collect, especially from magazine cover disks 😉

I am now making the assumption that you are running a single OS on your
PC, either DOS/Windows or Windows95 (where DOS is alive but gone into hiding).
I am also assuming that your hard disk is only visible as a single drive
letter C:. If you are already running with some of your hard disk
visible as other drive letters then please make sure that you understand what you are doing and how your hard disk is partitioned rather than following these instructions blindly.

The next stage will be running your operating systems de-fragmenter to
move all files that remain to the beginning of the disk. There are many
different de-fragmenters such as Norton Speed Disk if you have an old
DOS system or Microsoft’s DEFRAG.EXE if you have DOS 6 or Windows 95.
Force the defragmenter to do the most thorough job even if it claims that the disk is only slightly fragmented.
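With Microsoft’s DEFRAG under MS-DOS 6, for example, the full optimisation pass can be requested directly from the command line (other defragmenters use different switches, so check your own documentation):

DEFRAG C: /F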

What we really want it to do is shuffle all the empty space to one end of the disk so that we can re-claim it for Linux.

When the de-fragmenter has done its job make yourself a bootable
floppy disk

FORMAT A: /s

and copy the FIPS.EXE program from the dosutils directory on
the CD.
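Assuming your CD-ROM appears as drive D: under DOS (adjust the drive letter to suit your system), the whole boot floppy preparation is just:

FORMAT A: /S
COPY D:\DOSUTILS\FIPS.EXE A:\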

You now have a choice of two actions, either add the appropriate DOS device driver for accessing your CD drive to this disk or create a separate Linux boot disk.
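If you take the first route, the lines on the boot floppy look roughly like this – a sketch only, as the DEVICE driver file name (CDDRV.SYS here is made up) and its options depend on your particular CD-ROM drive, while MSCDEX.EXE is supplied with DOS/Windows:

In A:\CONFIG.SYS:
DEVICE=A:\CDDRV.SYS /D:MSCD001

In A:\AUTOEXEC.BAT:
MSCDEX.EXE /D:MSCD001 /L:D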

To make a Linux boot disk you will need another floppy disk.
Format this second floppy disk to be your Linux installation disk. It
is important that this disk has no bad sectors reported by format.

Run the RAWRITE.EXE program (also in the dosutils directory), answering
A as the destination drive and \images\boot.img as the file to copy. This will make your Linux boot disk.
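The session looks something like this, assuming the CD is drive D: (the exact prompt wording varies between RAWRITE versions):

D:
CD \DOSUTILS
RAWRITE
Enter disk image source file name: \images\boot.img
Enter target diskette drive: A: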

Incidentally this boot disk can also be used to install Linux systems over a network by NFS once your first Linux machine is up and running.
You can install hundreds of machines this way all from this single CD.

Next boot off the DOS floppy disk.
If you followed the first suggestion of adding CD device drivers to your DOS floppy, now would be a good time to boot the disk and check that you can see the CD – you will later be using the autoboot.bat command located in the dosutils directory, so make sure that you can see it.

Please take time to read the FIPS.DOC file in the same directory before running FIPS.EXE.

The purpose of this program is to let you divide your existing C: drive into a new smaller C: drive and a new empty D: drive. As the D: drive is empty (remember we moved all existing files to the start of the disk) it can later be re-allocated for use by Linux.

Why have you made a bootable floppy disk to run FIPS from? This is so that a copy of the original partition table can be kept there so we can later restore the single large C: drive if you want to give up on Linux.
Label this floppy and keep it safe.

Now either change directory to d:\dosutils and run autoboot if you can see the CD from your DOS floppy disk or boot off the Linux boot disk that
you made earlier with the rawrite program.

You will soon reach the section on partitioning the disk and be placed in
a program called fdisk. This does a similar job to the DOS FDISK program but looks different.

This is a typical session through the fdisk program.
Commentary is marked with a # sign and does not form part of what you see.

# How to partition a disk for Linux
# do not type anything that starts with a hash
# it's just explanation.

# fdisk will prompt you 'Command (m for help):' Type p to see existing
# partitions.

Command (m for help): p

Disk /dev/hda: 14 heads, 36 sectors, 830 cylinders
Units = cylinders of 504 * 512 bytes

Device Boot   Begin    Start      End   Blocks   Id  System
/dev/hda1   *        1        1      384    96750    6  DOS 16-bit >=32M
/dev/hda2          385      385      830   112392    6  DOS 16-bit >=32M

# here we see a disk with two DOS partitions as created by FIPS.EXE
# We kill this second DOS partition by typing a d then 2

Command (m for help): d
Partition number (1-4): 2

# see what we have done by typing a p
Command (m for help): p

Disk /dev/hda: 14 heads, 36 sectors, 830 cylinders
Units = cylinders of 504 * 512 bytes

Device Boot   Begin    Start      End   Blocks   Id  System
/dev/hda1   *        1        1      384    96750    6  DOS 16-bit >=32M

# See it's gone
# Now we make a Linux data partition with n,p,2
# The partition goes after the remaining DOS partition
# but does not use up the whole disk. You may need to
# create and remove several times to get a suitable residual size
# for the swap partition.

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (385-830): 386
Last cylinder or +size or +sizeM or +sizeK ([386]-830): 750

# we leave a generous bit for the swap partition
# Show the result with a p

Command (m for help): p

Disk /dev/hda: 14 heads, 36 sectors, 830 cylinders
Units = cylinders of 504 * 512 bytes

Device Boot   Begin    Start      End   Blocks   Id  System
/dev/hda1   *        1        1      384    96750    6  DOS 16-bit >=32M
/dev/hda2          386      386      750    91980   83  Linux native

# there it is!
# now we use the rest of the disk up for swap

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (751-830): 751
Last cylinder or +size or +sizeM or +sizeK ([751]-830): 830

Command (m for help): p

Disk /dev/hda: 14 heads, 36 sectors, 830 cylinders
Units = cylinders of 504 * 512 bytes

Device Boot   Begin    Start      End   Blocks   Id  System
/dev/hda1   *        1        1      384    96750    6  DOS 16-bit >=32M
/dev/hda2          386      386      750    91980   83  Linux native
/dev/hda3          751      751      830    20160   83  Linux native

# must change the partition type so the partition will
# be recognised as swap not data

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): 82
Changed system type of partition 3 to 82 (Linux swap)

Command (m for help): p

Disk /dev/hda: 14 heads, 36 sectors, 830 cylinders
Units = cylinders of 504 * 512 bytes

Device Boot   Begin    Start      End   Blocks   Id  System
/dev/hda1   *        1        1      384    96750    6  DOS 16-bit >=32M
/dev/hda2          386      386      750    91980   83  Linux native
/dev/hda3          751      751      830    20160   82  Linux swap

# write out the table we have if you are happy with it
# or q to abort without making any changes.

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
(Reboot to ensure the partition table has been updated.)
Syncing disks.

# That's it!

The swap partition should be at least 1.5 times as big as the physical memory that you have, e.g. with 16MB of memory allow 24MB of swap. Linux
can use swap space on a file-system like Windows does but the speed
improvement of dedicating a separate area of the disk to virtual memory
is well worth doing.
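As a worked example using the geometry from the fdisk session above: each cylinder is 504 sectors of 512 bytes, roughly a quarter of a megabyte, so the 80-cylinder swap partition (cylinders 751 to 830) comes to

80 cylinders x 504 sectors x 512 bytes = 20,643,840 bytes, i.e. about 20 MB

which matches the 20160 one-kilobyte blocks that fdisk reports. On the same disk a 24 MB swap area for a 16 MB machine would need roughly 98 cylinders.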

Each partition has a type associated with it. Linux data partitions are type 83, but you need to use the ‘t’ command to change the type of your desired swap partition to 82. Linux will know that any partitions of type 82 are usable as virtual memory swap space. Unlike Windows you can have multiple swap areas on the same or different disks. Linux will use the available swap in the most efficient way possible.
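The Red Hat installer will normally prepare and activate the swap partition for you, but it can also be done by hand once Linux is running. A minimal sketch, assuming the /dev/hda3 partition from the session above:

mkswap /dev/hda3
swapon /dev/hda3

and a line like this in /etc/fstab so that the swap space is enabled at every boot:

/dev/hda3   swap   swap   defaults   0 0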

When you have a partition table to your liking – with partition 1 as your new smaller DOS/Windows partition, partition 2 as your Linux data partition and partition 3 as your swap partition you can commit your work by using the w command.

You can then carry on with the installation procedure and start exploring the delights of Linux.

The Battle for the Desktop (Note this article is from 1997)

By Martin Houston

Introduction

As computers get faster the way that we are using them is changing. Systems of a power level that, 10 years ago, was found only in big and expensive departmental machines is now commonplace in desktop and even portable equipment.

The onward rush of faster and cheaper PC equipment has, through the 90s, led to a move away from the traditional centralised systems to great tangled webs of semi-autonomous PC based systems.

The explosive growth in PCs, with the relentless logic of economies of scale and the desire to standardise, has
also given the opportunity for a single company, Microsoft, to virtually dominate that industry with widely used but still totally
proprietary technology which many believe has been forged to benefit the monopoly holder rather than the user community.

It is now a widely accepted fact that PCs, because they are so complex, have high support costs – much higher over their useful
lifetime than the original equipment cost.

The other big problem with PCs is that they just keep getting faster. This does not seem like a problem if you are about to
purchase, but just consider that next year’s leading software applications will be written to show off their best features on
next year’s hardware. Your system either has to be stuck in a time-warp with the application versions which were state of the
art when it was purchased, or struggle running newer applications.

“Software bloat” is sometimes portrayed as a great conspiracy between hardware and software manufacturers. Hardware manufacturers want
you to buy new hardware, even though what you already have may have several years of useful life left in it, and software
manufacturers want your upgrade business so they invent new ways of using up processing power and disk space. The truth is not this
stark. Simply, computers, fast as they are, will always open up previously impossibly intensive computations, which in turn create a new mass-market desire to reap the benefits of them. As an example, 10 years ago 3D perspective games were very primitive – limited to wandering around a stark maze. Now we have complete gory combat!

Breaking the cycle

Linux is an industry phenomenon that has been driven almost entirely by users rather than hardware or software companies.
The purpose of Linux is to bring the powerful philosophy that has made UNIX the dominant OS on multi-user machines to bear in
tackling the PC problem. Linux gets its name from Linus Torvalds, an exceptionally bright student at the University of
Helsinki in Finland. As a project Linus wrote a rudimentary Unix-like OS that was small and specially optimised for the
Intel 386 processor. This in itself was good news as it meant that usefully fast Linux systems could be afforded by anyone who
wanted to take an interest. The really significant action that Linus took however was to publish the sources for Linux on the
Internet and invite other interested parties to take part. This created what can only be described as an inferno of activity.
Several years of pent up frustration at the onward march of increasingly powerful PCs shamefully crippled by pathetic Operating
Systems was released in a frenzy of development work. The seeds of Linux’s eventual success were firmly planted. Richard Stallman’s Free
Software Foundation had already been active for several years and had built up a very useful collection of free (and superior)
versions of the standard UNIX system programs such as C compilers, shells and editors. The FSF had its own free kernel
project called “The Hurd” but it just wasn’t attracting the interest of enough developers to be going anywhere particularly fast. Berkeley were in the process of freeing up BSD UNIX so that it could be targeted at the PC marketplace, but the effort was beset by two problems:
firstly, BSD was simply too big for most people to run, and secondly, the waters were muddied by legal disputes over ownership of
some of the code, which put many people off making time investments in something that could, one day, be taken away from them.
Linux was small, efficient, the software equivalent of a green field site. This is what attracted hundreds if not thousands of
dedicated developers who put time and effort into making Linux work. Partly this was out of a sense of challenge but with a
real pay-off in that at the end of it all the users had an OS that they could fix if it did not work for them.

Freedom is the key

The main reason why UNIX never made much impression on PCs in the past was that of cost. In the mid 80s 386 based PCs made
great UNIX systems and companies like SCO and Interactive made some headway in the market. However the typical price of a
complete suite of UNIX software, even for a 386, was in the region of 1,000 pounds. At this price people who knew in the back of
their minds that UNIX would be a wiser choice went instead for the cheap but unsatisfactory solution of DOS/Windows. Why was UNIX so
expensive? Although much of the original work on UNIX was done by Universities and given back to AT&T free of charge the
‘commercial exploitation’ of UNIX led to a suffocating liability of per-copy royalty payments: to AT&T, to Novell, and
surprisingly even to Microsoft. Microsoft got involved in the commercial UNIX market place early on with its own deliberately
incompatible variant called XENIX. Microsoft’s now legendary marketing skill meant that XENIX was the dominant UNIX system on
286 and then 386 PCs and as a direct result for several years was the most common form of UNIX, even though the system calls
had been changed to become proprietary, and so became the most popular target platform for UNIX application vendors (are we
going to be stupid enough to let history repeat itself with Visual J++ Vs Java I wonder?).

The UNIX community has had to pay dearly for ‘XENIX compatibility’: when UNIX System V.4 came out, Bill Gates was added to the
list of royalty recipients.

The situation with UNIX, in which some contributors received royalties and others did not, was unfair. The fact that much
of the OS source code was secret has led to situations where users are virtually held to ransom over software maintenance. In
contrast, in the Linux system no royalties are due to anyone for anything. That is not the same as saying that people cannot make
a business out of supplying and supporting Linux systems. The reason the users rather than the computer companies made Linux is that
such a market is a free market, open to competition. You are free to self-support Linux or contract that support to someone
with better skills and resources. As the source of everything is available there is no excuse for problems not to be resolved.

Linux is Leverage

Linux is ideal for use within large organisations that choose to employ their own support staff. Linux, with the readily available
documentation and Internet based communication with other users, means that your staff can solve problems themselves rather than just
passing on information to and from the software vendor. This is both good for staff morale and means that skill levels are always
increasing. It also means that your staff can feed back improvements into the Linux software base. This is what I mean
by leverage. By using Linux your staff can increase productivity by having access to the ideas and expertise of others, with some
reciprocation in the opposite direction. As knowledge shared is knowledge gained, the leverage of fully co-operating
with others rather than just ‘taking’ is very worthwhile.

Many large organisations have already discovered this and have environments where Linux and other UNIX systems work together
seamlessly. NASA are heavy Linux users, including several large multi-node parallel systems for stellar simulation work. Linux
has even been into space! An IBM Thinkpad was used to control experiments on a recent Shuttle flight. Several US Utility
companies run Linux systems as data collectors.

Linux is also a major player in the infrastructure of the Internet. At least 9% of World Wide Web sites are Linux based
and it is also used by many Internet Service Providers. Linux machines, even quite humble ones, make great fire-walls, routers
and even file and print servers for existing networks. A single Linux machine can allow files to be shared between Novell, SMB,
Appletalk and NFS.
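A minimal sketch of how two of those are configured (the directory and host names are made up for illustration): an /etc/exports entry makes a directory visible to NFS clients, and an smb.conf section offers the same directory to Windows (SMB) clients via Samba:

# /etc/exports
/home/shared    *.mycompany.co.uk(rw)

# smb.conf
[shared]
   path = /home/shared
   read only = no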

People use Linux when they want reliability, ease of maintenance and an attractively low cost. Unlike an ill-fated decision to
buy into Windows, the low cost of Linux should not be your main factor in choosing it!

The Role of Java

Java is a Unix technology. Sun has done a clever PR job on de-emphasising this so as not to frighten PC centric managers.
Like C, the UNIX technology before it, Java has the potential to unite different hardware platforms and provide portability.
Unlike C this is all done without a need for conditional compilation as Java uses the concept of a ‘virtual machine’ so
that Java programs can be the same even when underlying hardware is different.

Java is a fairly ‘low level’ language. It will probably replace many of the jobs now done by C or C++. The place where Java
will make the biggest impact is as an implementation language for the next generation of packaged software. At present
Microsoft has such a dominant market position because it is too hard to port existing Windows software to other systems. This
means that the bulk of Application software only ever gets written for Windows (which is a hard slog) and technically
superior OS platforms like Apple Mac, Unix and Linux are starved of Application choice.

Within 6 months Java will turn this situation on its head. Apart from Microsoft, who must secretly wish it would all go away, Application
vendors are sinking billions of dollars into producing Java based software such as general Office applications. They have
the benefit that once Java software is written it will never have to be ported to new hardware again. The Java virtual
machine will provide all the resources needed for the application to do its task. There is an added benefit in that
because an application is running in a virtual machine there is less scope for a rogue or Trojan application to wreak havoc.

In a way Java is bringing to application programmers the same sort of freedom that UNIX hackers with their shell and Perl
scripts have had for years.

The big benefit for Linux is that it has a Java VM (in fact a choice of several competing implementations) so will be
able to run all the new Java based software as well as any Windows system.

The Role of Perl

Java may be the answer to Application vendors’ prayers but it is too low level to be a suitable language for responding to fast
changing user requirements. Perl complements Java as it fills this role perfectly. Perl programs can be very rapidly
prototyped, but with proper software engineering can be reliable enough for full production use yet flexible enough to change
quickly to respond to new needs. Perl is a much higher level language than Java without a huge speed penalty for being so.
Speed critical parts of Perl programs can always be re-coded in C or C++ if needed. Language profiling support is provided to
ensure that the re-writing process does not get out of hand. Perl has full support for Object Orientated and Client-Server
programming. One particularly interesting Perl technology is Penguin which allows machines to pass each other
cryptographically signed packets of Perl code to be executed in a controlled environment. Penguin means that other machines never
have to be trusted any more than needed to perform the required function leading to a robust system. An ideal implementation of
Penguin would be to do an SQL query on a remote machine but with some custom filtering on the intermediate result before return
to the caller. This is a way that ‘variable width’ clients can be constructed to maximise overall system utilisation. One very
beneficial use for this would be an application specific convention of abbreviating returned data so that the slow part of network transfer was shorter. Smart abbreviation would achieve much greater effective
network speed than data compression alone.

Linux as a turn-key system

The concept of multiple users is something lacking from all versions of Windows apart from NT. The concept is however a
valuable one if a system is to be used by people who have no interest in, or business in changing, the way that the computer
is set up.

Linux can be set up so that the system boots into X windows with all data areas mounted by either NFS, SMB or Novell networking
protocols. The window manager can be configured so that only specific business functions are available from the desktop.
Novice users can even be denied interactive access to a system shell, or given access only to a restricted shell that permits safe
operations, such as manipulating files within a specific directory but not being able to move out of it.

However the big difference is that the full range of UNIX flexibility lies behind the menus on the user’s system. The
applications that the user invokes can be shell or Perl scripts, locally running binary programs, xterms firing up remote
programs and now even full Java applications.

Unlike a Windows based PC, which is stuck with traditional client-server or dumb terminal emulation, with Linux power on the
desktop a sensible plan can be made about how ‘fat’ a client is needed for each task. At one extreme would be a program that ran
entirely on the desktop machine, getting its data by NFS. The other extreme would be complete remote execution with just an X
window display from the remote application.
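As a minimal sketch of the second extreme, with made-up host names ‘bighost’ and ‘mydesk’: the program runs on the remote machine but its window appears on the local screen (the desktop must first allow the incoming connection):

xhost +bighost
rsh bighost xclock -display mydesk:0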

The first of these choices would suit a job which required ‘greedy’ CPU use but not much IO. Having the task running
locally means that only the user that wants the task to run would be impacted by it. An example would be the calculations
required to produce fancy display graphics.

The second choice is for a business critical job that has high data throughput but a relatively small amount of that data is
fed back to the user. Many database operations fall into this role. A pure host-only solution is a bit of a cop-out in that
the cheap CPU power on the desktop is lying idle while the expensive CPU in the server is being asked to do all the work.
Such solutions do not scale well. The ideal system would be one where the optimum balance of local CPU utilization to network
traffic was attained. This is not easy to do as it must be remembered that networks are glacially slow when compared to
modern CPUs so moving data just for the sake of it being processed somewhere else is something to be watched for.

Clearly what is needed is the ability to use a mix of different client/server technologies to attain maximum throughput for the
system as a whole. Linux because it shares its technology with the big host UNIX systems is ideal for this role. Perl can be
used to construct various custom client/server scenarios quickly to assess the best way forward.

Confidence in the system

What puts many people off Linux is that being a collectively developed system there is no vendor to be held responsible for
defects. To put it bluntly, there is nobody to sue. However the notion of a central party being responsible for such a complex
system and therefore liable in law has serious negative effects which damage the integrity of the system as a whole. Centralised
control means collective knowledge is no longer available for understanding shortcomings and rectifying them. Even if bugs can
be identified in a centrally controlled system the information is simply not there to investigate bugs in a meaningful way so
bugs remain unresolved for months or even years. A proprietary OS such as Windows is like a public transport bus. It will take
you to where you want to go (sort of) but if it breaks down you are left standing at the side of the road. You are dependent on
the Bus Company to fix the problem or send out a replacement bus. The average bus passenger would on no account be expected
to have the knowledge or equipment to fix the bus. Linux is like a car: yes, it can go wrong, or the new accessory that you have
fitted turns out to be a turkey. Unlike a bus, simple problems with a car (like a flat tyre) many people would be able to solve
for themselves, or at least know where to find an expert who can get the car going again.

Linux removes the mystery that the computer industry has spent the last 40 years cultivating. Armed with source code a Linux
computer is like any other piece of engineering – it can be fixed or even modified by anyone with enough knowledge.

As far as controlling this explosion of creative effort goes centralised project management on such a massive scale is
pointless. Linux works because people agree on interface specifications between components and programming is done
defensively. Unlike Windows, Linux & UNIX have the concept of file permissions to prevent users tampering with each other’s
data, and controlled execution environments for code that is not totally trusted, such as Java and the Perl Penguin module.

Most people writing Linux software are writing it primarily because they need it themselves. The prospect of code being on
public display can only aid the natural desire for quality. Developers are generally getting so much leverage from being
able to build on the work of others that the quality of many packages is very high and getting higher.

With normal common sense in configuration control you should be able to expect a community of Linux systems to be manageable
with less instability and security threat than any proprietary OS.

Conclusion

Linux is a revolution in the UNIX world that is beginning to make in-roads into Microsoft’s PC homeland. After years of talk
in the UNIX community about Open Systems Linux is the first Open System that is also accessible to everyone.

Although it started as an Intel PC only OS, Linux now runs on PC, DEC Alpha, Sun Sparc, SGI Mips, Power PC, Be, Apple Mac and Acorn
ARM.

As it is free from any licensing costs it is the best chance yet of the UNIX community arriving at a single unified OS to go
alongside the single unified Applications programming language of Java and the single unified scripting language of Perl.

Some Computer vendors, notably DEC and Apple, are actively funding Linux development for their platforms but others,
notably Sun, are hostile, seeing Linux as a threat to their revenues from proprietary UNIX variants (which indeed it is).

Linux offers immense cost and productivity benefits to corporate users who are big enough to have their own computer support
staff. A computing infrastructure with Linux on desk tops and taking some server processing roles means that support staff get
a chance to use skills rather than just being a message relay service to the OS vendor.

Linux is also great fun as it is so empowering – as millions of people world-wide have now found out for themselves.

Author: Martin Houston

This is my own little corner of the Internet. You will find a mixed bunch of stuff about Open Source (what I have done for a job for the last quarter of a century) and wider issues of what is wrong with the world. I am a freelancer so if you would like any software written (for money) get in touch!

2 thoughts on “Linux 20 years ago – RedHat 4.1 on the PCW magazine coverdisk”

  1. My partner and I absolutely love your blog and find nearly all of your post’s
    to be exactly I’m looking for. Does one offer guest writers to write content
    in your case? I wouldn’t mind publishing a post or elaborating on a few of the
    subjects you write regarding here. Again, awesome web site!

    1. No plans in wanting guest writers – as it says in the intro – this is just my own little corner of the Internet 😉
