Back Online - AdrianG — LiveJournal
I think I figured out the problem. In fact, I spent seven hours today trying various things to get that system working, to no avail, until I finally started figuring it out about two hours ago. Unless I'm mistaken about the issue, I shouldn't have that problem again, and I'm back to being able to send and receive email.
It turns out that for those of us who use Linux, there is some sort of kernel bug related to using ReiserFS on a multi-processor machine, if we are actually running an SMP kernel. The bug caused me immediate problems when I installed Suse 9.1 a week or so ago, and I contained most of them by telling the kernel not to use the other processor. It turns out that as long as I was running an SMP kernel, even if I told it not to use the other processor, there was still a subtle ReiserFS bug waiting to bite me. Now that I'm on a single-processor kernel, I seem to be past that problem.
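As a side note, here's a quick way to check which kernel flavor is actually running. The "smp" suffix in the release string is a distribution convention from the 2.6 era and varies, so treat the exact strings as assumptions:

```shell
# Show the running kernel release; many distributions append "smp"
# (or similar) to the release string of their multi-processor builds.
uname -r

# Count the CPUs the kernel is actually driving; a uniprocessor
# kernel (or one booted with maxcpus=1) will report just one.
grep -c '^processor' /proc/cpuinfo
```

If the second command reports more than one processor, you are still running an SMP kernel regardless of what the package name suggests.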
I really hope the problem is fixed for good, now.
Ummm, that new job I just got? Pogo Linux. We build these things for a living. And we don't use ReiserFS. We use XFS. Apparently Hans Reiser has, despite numerous cluebats from the community, decided that performance, not robustness, is his goal...
So back up, stomp the partition, reformat with XFS, and be on your merry way. (Or EXT3, if it's real serious robustness you want. EXT3 is, of course, based on the venerable EXT2, just with journalling, and I have NEVER had a problem with EXT3: dual processor, monster memory, slam the hell out of it. It's also understood by most rescue disks.)
The cool thing about this little problem is that, yeah, it's a heinous filesystem bug.... but we can choose what filesystem we want to run. We're not locked into NTFS. :)
Date: August 1st, 2004 03:25 am (UTC)
I'll have to look at XFS. At home, of course, I have more options, but I must say I am disappointed by this limitation of ReiserFS. We have a system at work with thousands of directories, each of which potentially contains thousands of files. We conducted a test of restoring the data on that system using several different filesystems. UFS was the worst and took more than a day to restore. So far, ReiserFS gives us at least 10 times the performance of any other filesystem we've tried under such circumstances, and we were able to restore the data in less than an hour. The lure of that performance is rather strong. At least now I know of this particular flaw.
Sounds like a good case for RAID 5 or better. If you want to give me a bit more on the requirements I'd be glad to try and provide more insight, just 'cause....
Date: August 1st, 2004 04:58 am (UTC)
We already use some pretty sophisticated storage systems. Fault tolerant architectures are good for lessening the consequences of hardware failures, but if someone does an
rm -rf /someimportantdirectory
we may still have to look to backups.
How big is the array? What I've seen done on single- and dual-drive systems is to simply attach a fat disk on a firewire card.... but if it's bigger than 400GB this may not work.
Date: August 3rd, 2004 01:56 am (UTC)
We use a variety of systems, many in the terabyte range. I have a theoretical understanding of what we use and some distant-past experience with disk subsystems, but it's really other groups that design our storage systems. I spend my time writing code to automate various things and am somewhat distant from it all, but since I was one of the first to buck the system and use Linux on my desktop, and (worse yet) to encourage other people to do so, I often get pulled into discussions about Linux. Besides, I've been in this business for about 25 years, so I get pulled into discussions of a lot of things.