[GRLUG] PHP fault tolerant web application

Phillip_L._Hebenstreit at hud.gov Phillip_L._Hebenstreit at hud.gov
Thu Aug 3 14:04:29 EDT 2006


Hi Zach,

Very good tips; they help reinforce some of the information I was coming up with.
I'll have to read your links tonight.

The database will only store the following:
      contacts
      contact information
      user information
      property addresses
      notes
      dates & times of all information
      misc. other chunks of data

The processing will mostly occur in the PHP application, because there will
be a lot of data validation as users type in data, and PHP will generate
reports based on that data.



Everyone:
      What do you think about this: since the PHP will run on the master
server, I wonder if I'd need some sort of Java applet that runs on the
client PC and monitors the server status. If the server is down, the
applet would redirect the client's browser to request future PHP pages
from the backup server.

This idea almost leads me to wonder whether the whole client-side
application shouldn't be done in Java, or whether I could get away with a
small Java applet that just redirects the browser. (I'm really bad at
Java, though.)





Phillip_L._Hebenstreit at hud.gov wrote:
>
> Hi everyone,
>
> Sorry, I'm awful at describing what I mean in an email, let me try again.
>
> Could anyone give me tips on developing a PHP Web Application using
> Apache and MySQL. The PHP web application needs to be fault tolerant and
> my leanings are towards using Replication across 2 remotely separated
> Linux servers. So, the PHP application, the Apache httpd and the MySQL
> all need to be replicated and if a failure occurs the PHP application
> needs to respond to that event as smoothly as the MySQL and Apache do
> (hopefully).
>
>
> I'm having trouble finding specifics on how to do this instead of a lot
> of theory and debate.

Ok, here's a little primer. The web application is usually tied to your
web server. They go hand in hand, like eggs in a basket.

The database server is usually on a separate physical machine, but not
always (although good design says to keep it separate from your
webapp/web servers).

Load balancing across your web servers usually occurs from another
device. Sometimes this is just another server with apache on it,
allowing apache to load balance, and other times it is an actual device
designed to load balance.

So far you've got three tiers: the load balancer, the web/app level,
and the database level. Usually these are physically separated from one
another, although they don't have to be. Logically they are always
separated from one another.

Tier 1's (load balancing) fault tolerance isn't in the scope of this
email, but these devices usually come or are set up in pairs.

Tier 2 (web/app) is very scalable horizontally. You can continue to add
more and more web/app servers to handle the load.

Tier 3 (db) is not very scalable horizontally, and to really scale up
costs some big bucks.

To start with a simple scenario, with MySQL you could have two database
servers replicating (not clustering... just replicating). So there is a
simple master/slave configuration. Here's how this works: in MySQL 4.x
through 5.0.x, statement-based replication occurs. That is, you send an
insert/update/delete/etc. SQL statement to the master, and that same
statement will get sent to your slave and processed. The replication
doesn't work in the reverse direction, and you don't want it to.

Now the database doesn't know about your application, so your
application has to know about the database(s). If you want to provide
simple fault tolerance you setup a configuration for your webapp which
points to the master, and you ignore the slave. If the master fails,
then you could have a monitoring script (or you could do it manually)
update the configuration for the webapp and point it to the slave. When
the master comes back online you would change the configuration(s)
around so it is now the new slave, and the old slave becomes the master.
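
A rough sketch of what the webapp side of that could look like in PHP --
the host names, credentials, and fallback logic here are just made up for
illustration, not from any real setup:

<?php
// Hypothetical config + failover check the webapp could run per request.
// Host names, credentials and database name are invented for this sketch.
$db_hosts = array(
    'master' => 'db1.example.com',
    'slave'  => 'db2.example.com',
);

function get_db_connection($hosts)
{
    // Try the master first.
    $link = @mysqli_connect($hosts['master'], 'webapp', 'secret', 'appdb');
    if ($link) {
        return $link;
    }
    // Master looks dead: fall back to the slave. A monitoring script (or
    // a human) would also swap the roles in the config so the old slave
    // formally becomes the new master.
    $link = @mysqli_connect($hosts['slave'], 'webapp', 'secret', 'appdb');
    if (!$link) {
        die('Both database servers are unreachable');
    }
    return $link;
}

$db = get_db_connection($db_hosts);

In practice you'd want the monitoring piece to do the promotion rather
than letting every request silently fail over, but this shows the idea.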

If you are looking for better performance, there is the option of having
your webapp send only insert/update/delete/etc. SQL statements to the
master, and then do a round-robin approach of having it send all select
SQL statements to the slave. Since replication is statement-based, if an
insert/update/delete/etc. statement is CPU- or IO-intensive on the
master, it will also be CPU- or IO-intensive on the slave. So having one
solely do writes and another solely do reads isn't going to get you that
much of a performance boost. This is not true if there is a lot of work
being done on your database server outside of MySQL in order to generate
your SQL statements.
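
Here's roughly what that write-to-master / round-robin-selects idea could
look like in PHP. Again, host names and credentials are invented, and
connecting per query is only to keep the sketch short:

<?php
// Hypothetical read/write split: writes always go to the master, SELECTs
// are handed out round-robin across master and slave.
$write_host = 'db1.example.com';
$read_hosts = array('db1.example.com', 'db2.example.com');

function run_query($sql)
{
    global $write_host, $read_hosts;
    static $rr = 0;

    // Anything that isn't a SELECT has to hit the master.
    if (preg_match('/^\s*select/i', $sql)) {
        $host = $read_hosts[$rr++ % count($read_hosts)];
    } else {
        $host = $write_host;
    }

    // Persistent connections would be better than connecting every time.
    $link = mysqli_connect($host, 'webapp', 'secret', 'appdb');
    return mysqli_query($link, $sql);
}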

One nice thing about this approach is that if you store sessions in your
database, users who visit your web site can be load balanced to any
web/app server and continue browsing your site without knowing a
thing about what's going on.
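
If you want to go that route, PHP lets you plug your own storage in with
session_set_save_handler(). This is only a sketch; the sessions table and
its columns are made up:

<?php
// Assumed (hypothetical) table:
//   CREATE TABLE sessions (id VARCHAR(32) PRIMARY KEY, data TEXT, updated INT);
$db = mysqli_connect('db1.example.com', 'webapp', 'secret', 'appdb');

function sess_open($path, $name) { return true; }
function sess_close()            { return true; }

function sess_read($id) {
    global $db;
    $id  = mysqli_real_escape_string($db, $id);
    $res = mysqli_query($db, "SELECT data FROM sessions WHERE id = '$id'");
    $row = mysqli_fetch_assoc($res);
    return $row ? $row['data'] : '';
}

function sess_write($id, $data) {
    global $db;
    $id   = mysqli_real_escape_string($db, $id);
    $data = mysqli_real_escape_string($db, $data);
    mysqli_query($db, "REPLACE INTO sessions (id, data, updated)
                       VALUES ('$id', '$data', " . time() . ")");
    return true;
}

function sess_destroy($id) {
    global $db;
    $id = mysqli_real_escape_string($db, $id);
    mysqli_query($db, "DELETE FROM sessions WHERE id = '$id'");
    return true;
}

function sess_gc($max) {
    global $db;
    mysqli_query($db, "DELETE FROM sessions WHERE updated < " . (time() - $max));
    return true;
}

session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                         'sess_write', 'sess_destroy', 'sess_gc');
session_start();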

This is a very simple setup. The one thing I would caution, because I
have hit this problem, is to make sure your SQL statements run as
efficiently as possible. Once you get any number of users hitting your
site, one really slow SQL statement will slow down all the other
statements. They say one bad apple can spoil the bunch; the same is true
with SQL. One bad query can make your site appear unresponsive to any
number of users, whether it's 2 or 300.
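
One cheap way to catch the bad apples is to time every query from PHP and
log the slow ones. The one-second threshold and log path below are
arbitrary examples:

<?php
// Wrap queries so anything slower than the threshold gets logged.
function timed_query($link, $sql)
{
    $start   = microtime(true);
    $result  = mysqli_query($link, $sql);
    $elapsed = microtime(true) - $start;

    if ($elapsed > 1.0) {
        error_log(sprintf("SLOW QUERY (%.2fs): %s\n", $elapsed, $sql),
                  3, '/var/log/webapp/slow-queries.log');
    }
    return $result;
}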

We are setting up test replication at our office, using round-robin so
that selects come from both the master and the slave. Currently we
replicate, but we ignore the slave server unless something bad happens to
the master.

The other way to get high availability is to actually use MySQL's
clustering technology. I believe this requires a minimum of three
servers: one to play the NDB management node, and then two to perform
the clustering. I don't think the management node has to be on a machine
by itself, so you could possibly get away with doing a cluster with two
machines.

Clusters are limited because you can only cluster tables that use the
Cluster (NDB) engine (no MyISAM and no InnoDB), but for higher
availability and better fault tolerance this is the way to go, because
each cluster node holds certain partitions of your database, which makes
selects super fast. Clusters don't have to operate by themselves
either. You can have other MySQL database servers which interface to
your webapp, and then use those to communicate with your clusters.
Clusters can also replicate to other backup clusters, but I don't
believe this is available until MySQL 5.1.6.

I would first recommend that you set up two databases that are
replicated. Then practice pretending the master dies or the slave dies,
and learn how to bring things back up to par.

Do you know the amount of data you are expecting to house or the number
of concurrent users you are expecting to hit? Some of our selects got
really slow when we had queries that would select a few hundred thousand
rows. Up to that point it worked great, but then it came to a screeching
halt anytime anyone ran that query.

Put as much processing on your webapp as possible. Usually the webapp
layer is underutilized and the database is overutilized. Do not do
sorting on the webapp server for queries that use paging techniques,
though, because in order to page accurately you have to let the database
do the sorting and handle the limit and offset of a search.
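
For example, a paged listing could look something like this in PHP, where
MySQL does the ORDER BY and LIMIT. The table and column names are just
examples:

<?php
// Let MySQL sort and page; only the current page's rows come back to PHP.
$page     = isset($_GET['page']) ? max(1, (int) $_GET['page']) : 1;
$per_page = 25;
$offset   = ($page - 1) * $per_page;

$db  = mysqli_connect('db1.example.com', 'webapp', 'secret', 'appdb');
$sql = "SELECT id, address, updated
        FROM properties
        ORDER BY updated DESC
        LIMIT $offset, $per_page";

$res = mysqli_query($db, $sql);
while ($row = mysqli_fetch_assoc($res)) {
    echo htmlspecialchars($row['address']) . "<br />\n";
}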

Be sure to read through
http://dev.mysql.com/doc/refman/5.0/en/replication.html

In the near future, MySQL 5.1 will introduce:
 - row-based replication (which doesn't bring the same CPU/IO intensity
to the slave server that statement-based replication does now)
 - mixed replication which is a mixture of row-based replication (RBR)
and statement-based replication (SBR)
 - MyISAM tables get row locking and transaction support
 - partitioning of individual tables across a filesystem