Linux on the Mainframe

My current job involves running a large JBoss-powered healthcare application on two IBM z10 mainframes.

The mainframe world is somewhat new to me, and adjusting to a whole new vocabulary has been a challenge. Both of our ELS boxes will be running Red Hat, and besides JBoss we will also have a couple of hefty Oracle instances running.

I am also excited about the two XIV storage boxes we have purchased for this project; it will be fun to torture them to see how they hold up.

I’ll keep my blog updated once the fun (read: torture of hardware and software) begins.


The art of benchmarking and performance evaluations

Over the last couple of years I have done a lot of benchmarking and performance evaluation work. When a member of my team comes up with a good idea or drags some new technology into the office, the first remark usually is: “How does it perform, and will it scale?” Today I will be …
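
For the database side of that question, a quick sysbench smoke test usually gives a first impression. The sketch below is only that, a sketch: the credentials, table size and thread count are placeholders, and the flags follow the old 0.4-style OLTP test, so adjust them to whatever sysbench version you actually have installed.

    # prepare a 1M-row test table (credentials are placeholders)
    sysbench --test=oltp --mysql-user=bench --mysql-password=secret \
             --mysql-db=sbtest --oltp-table-size=1000000 prepare

    # run the OLTP workload with 16 threads for 60 seconds
    sysbench --test=oltp --mysql-user=bench --mysql-password=secret \
             --mysql-db=sbtest --num-threads=16 --max-time=60 \
             --max-requests=0 run

A single run tells you very little; repeating it while varying only the thread count is what answers the “will it scale?” part.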


MySQL 5.5 is now GA!

This is going to be a goooood day. MySQL 5.5 just went GA and I have time to play with it this morning … Read the official release announcement here. I last played with MySQL 5.5 a couple of months ago. When replacing the binaries only, e.g. no change in the configuration file, I have …
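
For reference, a binary-only swap on a stock install looks roughly like the sketch below; the init script name and the use of the default paths are assumptions, and mysql_upgrade should always be run afterwards to bring the system tables up to 5.5.

    # stop the old server, swap the binaries in place
    # (leaving datadir and my.cnf untouched), then start it again
    /etc/init.d/mysql stop
    /etc/init.d/mysql start

    # upgrade the system tables and confirm the new version
    mysql_upgrade -u root -p
    mysqladmin -u root -p version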


Large scale disk-to-disk backups using Bacula, Part VI

This is going to be my last post in the series. There are a few loose ends to tie up and some more questions to answer. I’ll also explain some of the missing pieces of our puzzle. Our Bacula deployment is actually really simple. We are only using the most basic features that Bacula has …
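
As a rough illustration of what “basic” means in practice, almost all of the day-to-day interaction happens through bconsole; the job name below is made up.

    # check that the Director is up and what is scheduled next
    echo "status dir" | bconsole

    # list recent jobs and read any pending messages
    printf "list jobs\nmessages\n" | bconsole

    # kick off an ad-hoc full backup of a hypothetical job
    echo "run job=BackupClient1 level=Full yes" | bconsole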


Large scale disk-to-disk backups using Bacula, Part V

The integration of Bacula into the rest of our company has been a very straightforward process. Amongst other things, we have an internal control panel that is used to conduct most business operations, like creating and changing customer subscriptions, plus a billing component which continuously monitors resource consumption and informs our customer databases about changes …


Why ZFS rocks for databases …

My series of posts regarding Bacula has resulted in a number of questions about why we have large MySQL databases on ZFS. This post will give you a bird’s-eye view of exactly why ZFS is so cool for database deployments. If you do not know what ZFS is, you should read this. Data Integrity: Data …
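
To give a taste of the database-friendly knobs before diving in, here is a minimal sketch of a dedicated dataset for an InnoDB data directory. The pool and dataset names are invented, and the property values are common starting points rather than a recommendation.

    # dedicated dataset for the MySQL data directory
    zfs create tank/mysql

    # align the ZFS record size with InnoDB's 16K page size
    zfs set recordsize=16K tank/mysql

    # InnoDB has its own buffer pool, so keep only metadata in the ARC
    zfs set primarycache=metadata tank/mysql

    # every block carries a checksum, which is where the
    # data-integrity story comes from
    zfs get checksum tank/mysql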


Large scale disk-to-disk backups using Bacula, Part IV

This post will provide more insight into our current Bacula configuration and the underlying methodologies. Our emphasis is to keep the time we spend on configuration to an absolute minimum while still maintaining a high degree of flexibility. We have come up with a way of scheduling that is extremely simple, yet flexible. Our dirty …
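
As a flavor of why the configuration stays small, a single Schedule plus a JobDefs resource can drive any number of clients in Bacula; the snippet below follows the stock sample configuration rather than our actual setup.

    # bacula-dir.conf excerpt: one schedule, shared job defaults
    Schedule {
      Name = "WeeklyCycle"
      Run = Full 1st sun at 23:05
      Run = Incremental mon-sat at 23:05
    }

    JobDefs {
      Name = "DefaultJob"
      Type = Backup
      FileSet = "Full Set"
      Schedule = "WeeklyCycle"
      Storage = File
      Pool = Default
      Messages = Standard
    }

Each client then only needs a tiny Job resource that references DefaultJob.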


Large scale disk-to-disk backups using Bacula, Part III

My last two posts have sparked a lot of interest; my mailbox has been overflowing with lots and lots of questions from all over the world. This post will provide more details about our Bacula infrastructure, while the next post will discuss our Bacula configuration. Let me give the impatient readers a summary: …


Large scale disk-to-disk backups using Bacula, Part II

As mentioned earlier, we recently completed a fairly large Bacula deployment. A number of people have asked me why we chose Bacula over the more … established names in the backup business. To be completely honest, it was almost coincidental. We negotiated with IBM, Symantec and CommVault, and they all deliver solid data …


Large scale disk-to-disk backups using Bacula

Over the past year I have been deeply involved in the nitty-gritty details of choosing, designing, building, deploying and managing a new backup infrastructure at work. It has been a very educational experience. Our old backup platform consisted of various tools and technologies, and the resulting spaghetti bowl got more and more difficult …