[NTLUG:Discuss] Distributed processing
Chris Cox
cjcox at acm.org
Tue Jul 17 22:31:01 CDT 2001
This sounds like a typical distributed transaction processing
scenario. Nowadays you would use a Web Application Server...
like Weblogic (or maybe WebSphere). I am somewhat familiar with
what Weblogic can do, and it supports application server
clustering with load balancing and failover. I believe it will
do everything you want it to do... of course, you'll need to
use Java to use Weblogic effectively. (And Weblogic carries a
LARGE price tag... the cost of scaling big.)
Not sure if WebSphere or Tomcat or others do the clustering
effectively with load balancing and failover... if so, they
may be a more cost-effective solution on the lower end.
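The failover half of that, at least, is not magic. Ignoring
Weblogic entirely, the client side can be as simple as the C
sketch below (the replica addresses are made up): walk a list of
replicas until a connect() succeeds.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    /* Hypothetical replica list -- in a real cluster this would
     * come from configuration or a naming service. */
    static const char *replicas[] = { "10.0.0.1", "10.0.0.2", "10.0.0.3" };

    /* Try each replica in turn; return a connected socket or -1. */
    int connect_with_failover(unsigned short port)
    {
        struct sockaddr_in addr;
        size_t i;
        int fd;

        for (i = 0; i < sizeof(replicas) / sizeof(replicas[0]); i++) {
            fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                continue;

            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port = htons(port);
            inet_pton(AF_INET, replicas[i], &addr.sin_addr);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                return fd;      /* replica is up -- use it */

            close(fd);          /* replica is down -- fail over */
        }
        return -1;              /* whole cluster unreachable */
    }

An app server layers session replication and load-aware routing
on top of that same fallback loop.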
Regards,
Chris
Greg Edwards wrote:
>
> I sent this out last week and was wondering why I never saw it posted.
> Well, I guess I sent it to admin at ntlug.org instead. Sorry. :)
>
> -------- Original Message --------
> Subject: Distributed processing
> Date: Sun, 08 Jul 2001 17:54:36 -0500
> From: Greg Edwards <greg at nas-inet.com>
> Organization: New Age Software, Inc.
> To: ntlug Admin <discuss-admin at ntlug.org>
>
> I've been searching for tools that will do distributed processing at the
> function level and haven't had much luck. There are plenty of
> distributed load managers and parallel processing managers (such as
> Beowulf). The load managers work at the program level and the parallel
> processing managers at the calculation level. Neither of these solutions
> answers my needs, and multi-threading has too many drawbacks to be a
> solution here. What I need is a distribution manager that will pass the
> load around at the procedure level.
>
> What I'm trying to do is run an application farm for interactive web
> applications, though the solution would be usable well beyond web
> applications. The idea is that during the processing of an application,
> rather than a single program using a set of libraries on a single box,
> an API would allow each function request to be distributed among the N
> boxes that support that function. Not every box would support every
> function available throughout the farm, but every box would have
> knowledge of every function in the farm.
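>
> Very roughly, the API surface I'm imagining is sketched below in C.
> All the names and hosts are made up, and the real routing would weigh
> current load and data locality; round-robin just stands in for it
> here:
>
>     #include <stdio.h>
>     #include <string.h>
>
>     #define MAX_HOSTS 8
>
>     /* One entry in the farm-wide function table: which boxes can
>      * service a given named function.  Every box carries the full
>      * table, even for functions it does not implement itself. */
>     struct farm_func {
>         const char *name;                 /* e.g. "emp_search" */
>         const char *hosts[MAX_HOSTS];     /* boxes implementing it */
>         int         nhosts;
>         int         next;                 /* round-robin cursor */
>     };
>
>     static struct farm_func table[] = {
>         { "emp_search", { "boxa", "boxb" }, 2, 0 },
>         { "emp_sort",   { "boxb", "boxc" }, 2, 0 },
>     };
>
>     /* Pick a box for the named function. */
>     const char *farm_route(const char *func)
>     {
>         size_t i;
>         for (i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
>             struct farm_func *f = &table[i];
>             if (strcmp(f->name, func) == 0 && f->nhosts > 0)
>                 return f->hosts[f->next++ % f->nhosts];
>         }
>         return NULL;   /* no box in the farm supports this function */
>     }
>
>     int main(void)
>     {
>         printf("emp_search -> %s\n", farm_route("emp_search"));
>         printf("emp_search -> %s\n", farm_route("emp_search"));
>         return 0;
>     }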
>
> For example, say the application needs to search a database for all
> employees that have 20 years of service and then return that list sorted
> by employee age. The entry application would reach the point of needing
> the data and call the API, which in turn would determine the best
> machine in the farm to process the request based on current load and
> data availability. The request would then be passed to that machine, and
> the entry application would go on about its business until the results
> were returned. During the processing of that request, the search for
> employees and the sort might be split among multiple machines as well.
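>
> The entry application's side of that might read like the sketch
> below. The farm_* calls are hypothetical, and a local thread stands
> in for the remote box so the sketch is self-contained:
>
>     #include <stdio.h>
>     #include <pthread.h>
>     #include <unistd.h>
>
>     /* Hypothetical farm request handle.  The real thing would ship
>      * the call over the network; a thread mocks the remote box. */
>     struct farm_req {
>         pthread_t   tid;
>         const char *query;
>         const char *result;
>     };
>
>     static void *remote_box(void *arg)
>     {
>         struct farm_req *r = arg;
>         sleep(1);               /* pretend the remote box is working */
>         r->result = "42 employees, sorted by age";
>         return NULL;
>     }
>
>     /* Issue the request and return immediately. */
>     static void farm_call(struct farm_req *r, const char *query)
>     {
>         r->query = query;
>         pthread_create(&r->tid, NULL, remote_box, r);
>     }
>
>     /* Block only when the caller finally needs the rows. */
>     static const char *farm_wait(struct farm_req *r)
>     {
>         pthread_join(r->tid, NULL);
>         return r->result;
>     }
>
>     int main(void)
>     {
>         struct farm_req req;
>         farm_call(&req, "years_of_service >= 20 ORDER BY age");
>
>         puts("entry application goes on about its business...");
>
>         printf("result: %s\n", farm_wait(&req));
>         return 0;
>     }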
>
> I want to eliminate the issues of connection counts, task counts, user
> counts, etc. that a high number of concurrent users can cause. It would
> also let the farm exploit processing heuristics such as cache usage,
> repeated dataset processing, heavy math processing, graphics generation,
> database access, etc.
>
> The basic topology of the farm would be a web server that handles the
> web connections and static pages. The farm would handle the processing
> and pass dynamic pages back to the web server for delivery as static
> pages. The web server would determine which entry point in the farm to
> send each initial request to.
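>
> For that initial routing the web server could keep something as
> simple as a least-loaded table of entry points (the hosts and counts
> here are made up; a real setup would refresh them with a heartbeat):
>
>     #include <stdio.h>
>
>     /* Hypothetical load table for the farm's entry points. */
>     struct entry_point {
>         const char *host;
>         int         active_requests;
>     };
>
>     static struct entry_point farm[] = {
>         { "entry1", 12 }, { "entry2", 3 }, { "entry3", 7 },
>     };
>
>     /* Pick the least-loaded entry point for the next request. */
>     const char *pick_entry(void)
>     {
>         struct entry_point *best = &farm[0];
>         size_t i;
>
>         for (i = 1; i < sizeof(farm) / sizeof(farm[0]); i++)
>             if (farm[i].active_requests < best->active_requests)
>                 best = &farm[i];
>
>         best->active_requests++;   /* account for the new request */
>         return best->host;
>     }
>
>     int main(void)
>     {
>         printf("send the initial request to %s\n", pick_entry());
>         return 0;
>     }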
>
> I hope this makes sense. Has anyone seen anything along these lines in
> the Linux world? My target language (initially) is C for performance
> reasons.
>
> --
> Greg Edwards
> New Age Software, Inc.
> http://www.nas-inet.com