app domains - Java SE (Archived)

One of the few concrete things .NET has that Java does not is the concept and implementation of AppDomains, where multiple applications can run in separate compartments inside a single VM process. I think this would go a long way toward addressing problems of VM startup time and memory footprint. I have no idea how robust the .NET implementation is in the face of application failures, but I sure like the idea in theory.

I'll go with this. I think it's along a similar line to the Isolates API.
The ability to stop and start individual domains is a good thing, and should be in Mustang.

How is this different from classloader-separation and AppContext (in case of Swing) separation?

There are a few awkward cases which can't be handled properly with class loaders. Notably, some global state (e.g. System.out, the security manager, URL stream handler factories, etc.) can't easily be separated to give each application its own copy within a single JVM. Anything which uses JNI adds another can of worms in this context.
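A small sketch of the System.out case: the standard output stream is a single JVM-wide slot, so once one hosted "application" replaces it, every other application in the same VM sees the replacement too. The two-application scenario here is hypothetical, compressed into one main method.

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

// Demonstrates that System.out is JVM-global state that class loaders
// alone cannot separate per-application.
public class GlobalStateDemo {
    public static void main(String[] args) {
        PrintStream original = System.out;
        ByteArrayOutputStream captured = new ByteArrayOutputStream();

        // "Application A" installs its own stdout.
        System.setOut(new PrintStream(captured, true));

        // "Application B" prints, believing it owns stdout...
        System.out.println("hello from B");

        // ...but its output landed in A's buffer.
        System.setOut(original);
        System.out.println("B's output was captured: "
                + captured.toString().trim());
    }
}
```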


extending the debate: multiple JVMs' interactions

after reading and answering atbtpvc's thread about unique application instances over multiple JVMs, I was wondering why Sun hadn't implemented such a feature (by managing a shared memory space for JVMs for this kind of need)
is it impossible?
has nobody asked for it?
does it break any rule I wouldn't know of?
Never thought of it... ignorance is bliss
> after reading and answering atbtpvc's thread about unique application instances over multiple JVMs, I was wondering why Sun hadn't implemented such a feature (by managing a shared memory space for JVMs for this kind of need)

Sun's interpretation of this type of feature is in J2EE; its implementation is called Sun ONE Application Server.

> is it impossible?

Distributed objects are available in many different flavours. Their main concern is multiple JVMs over multiple machines. If you have multiple JVMs on the same machine you can always put them into one JVM (provided the application supports running multiple applications in one JVM).
In either case shared memory is not appropriate.
Java NIO supports shared memory.

> has nobody asked for it?

They probably have, but I see no compelling reason to support it.

> does it break any rule I wouldn't know of?

Not that I can think of.
thanks a lot for this quick and accurate answer ! :)
the solution would then consist of using a kind of singleton pattern and launching everything within one JVM?
You can launch everything in one JVM. Application servers use a different ClassLoader for each application, allowing them to unload the application. This is likely to be more complicated than necessary.
If you start each application in its own Thread, or even its own ThreadGroup, they can co-exist fairly well.
A Singleton pattern can then be used to register and find named shared objects.
This model can be further extended with JNDI and RMI. However, if performance and simplicity are a concern, just use the Singleton.
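A minimal sketch of the registry-singleton pattern described above, for sharing named objects between applications launched in the same JVM. The class and method names are illustrative, not from any library.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A singleton registry that applications sharing one JVM can use to
// publish and look up shared objects by name.
public class SharedRegistry {
    private static final SharedRegistry INSTANCE = new SharedRegistry();
    private final Map<String, Object> objects = new ConcurrentHashMap<>();

    private SharedRegistry() {}

    public static SharedRegistry getInstance() { return INSTANCE; }

    public void register(String name, Object obj) { objects.put(name, obj); }

    public Object lookup(String name) { return objects.get(name); }

    public static void main(String[] args) {
        // "Application A" publishes a shared object...
        SharedRegistry.getInstance().register("config", "mode=shared");
        // ...and "application B" finds it by name.
        System.out.println(SharedRegistry.getInstance().lookup("config"));
    }
}
```

Note this only works cleanly if the registry class itself is loaded by a class loader both applications share; otherwise each class space gets its own singleton.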

IPC between 2 Java VM processes

I'm looking for a complete list of possible ways in which two Java VMs, i.e. the classes in them, can communicate with each other when they are running on the same machine.
I currently know of JNI and Sockets.
I ignore CORBA because it is not intended for IPC, and for that reason I also ignored RMI. Sockets aren't intended for it either, but they are easy to implement.
Anymore ideas ?
Thanks and regards,
Stephan Gloor
if your classes are built for it, you should be able to execute them in a single jvm and greatly benefit from that, at least theoretically. of course there are separation reasons like security that come from the fact that two OS processes are isolated from each other and have dedicated properties. however, in my eyes the goal should be to execute them in one vm. may i ask why you need to run them in separate vms?
other ways of ipc would be files, SYS-V ipc (semaphores, message queues, shared memory) via JNI native add-ins (yuck), signals (yuck again) and pipes.
pipes are probably the nicest way of same-host IPC on unix, but i wouldn't know if it's a good idea with java.
And Unix domain sockets. 
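For the "easy to implement" socket option mentioned above, here is a minimal same-machine IPC sketch over a loopback TCP socket. Server and client run in one process purely so the example is self-contained; in practice they would be two separate JVMs.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Two "JVMs" talking line-oriented text over localhost TCP.
public class SocketIpcDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            int port = server.getLocalPort();

            // The "server JVM": accept one connection, echo a reply.
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(
                             s.getOutputStream(), true)) {
                    out.println("pong: " + in.readLine());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
            serverThread.start();

            // The "client JVM": connect, send a request, read the reply.
            try (Socket client = new Socket("localhost", port);
                 PrintWriter out = new PrintWriter(
                         client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println("ping");
                System.out.println(in.readLine());
            }
            serverThread.join();
        }
    }
}
```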
I think the best way to communicate between two JVMs is with RMI. Try with this:
> I'm looking for a complete list of possible ways in which two Java VMs, i.e. the classes in them, can communicate with each other when they are running on the same machine.

I would go with RMI, for the following reasons:
1) The communication mechanism is already defined. If you do something like open a pipe between processes, you need to marshall/unmarshall the data yourself.
2) It's about as efficient as you'll find. Shared memory will be faster, pipes may be faster, but sockets are essentially the same thing.
3) It allows you to move your JVMs to separate machines at such time as you deem that necessary. Shared memory won't; pipes won't unless you replace them with sockets.
To rephrase and reiterate what another poster has said: exactly what are you trying to accomplish by separating JVMs, and could that be accomplished more easily with multiple threads in the same JVM?
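The RMI recommendation above can be sketched as follows. Both ends run in one process here so the example is self-contained (in practice server and client would be separate JVMs talking through a registry on localhost), and the interface name and port are illustrative. The call still goes through an exported stub over a loopback socket, with RMI doing the marshalling that a raw pipe or socket would leave to you.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Same-machine IPC via RMI: define a remote interface, export an
// implementation, look up its stub, and invoke it.
public class RmiIpcDemo {
    public interface Echo extends Remote {
        String echo(String msg) throws RemoteException;
    }

    public static class EchoImpl extends UnicastRemoteObject implements Echo {
        protected EchoImpl() throws RemoteException { super(); }
        public String echo(String msg) { return "echo: " + msg; }
    }

    public static void main(String[] args) throws Exception {
        // Server side: create a registry and bind the remote object.
        Registry registry = LocateRegistry.createRegistry(1099);
        EchoImpl impl = new EchoImpl();
        registry.rebind("echo", impl);

        // Client side: look up the stub and invoke it remotely.
        Echo stub = (Echo) registry.lookup("echo");
        System.out.println(stub.echo("hello"));

        // Clean shutdown so RMI's non-daemon threads exit.
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(registry, true);
    }
}
```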
ooops, i forgot the unix domain sockets.
i guess RMI sounds attractive, since RMI will make the transition to a more distributed type of application easier. So with that in mind I guess I would use RMI. Sockets are not a bad option either, as you might use those together with objectstreams.
however, the itching question remains: why two jvms on one computer? there might be reasons for that, but whatever that reason may be, i guess it would be beneficial to try to eliminate those reasons and allow the applications to share a jvm, for a lot of reasons. it's a pity that a lot of programmers use the static keyword in such a way that their classes cannot really be shared by two applications in the same jvm (not even to mention System.exit, which would be a NO in such a shared-jvm context).
regards, stepan 
Thanks a lot for your answer. I did find that sockets are a very efficient way and relatively easy to implement.
The problem is simple: We have servlets running in one VM and a gateway running in a separate VM. Until now, these were totally separate tasks, but now they should communicate with each other. They will probably run on separate machines one day; right now they don't yet.
Thanks and regards,
Stephan Gloor
to stepanroots:
Firstly you can use a SecurityManager to prevent code from executing system calls like System.exit and other 'bad' calls.
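A sketch of that SecurityManager approach: veto System.exit from hosted code by overriding checkExit. Note that SecurityManager reflects the era of this discussion; it is deprecated in recent JDKs and may refuse to install at all (JDK 18+ requires -Djava.security.manager=allow), so the sketch handles that case too.

```java
// Blocks System.exit from code running under this manager while
// permitting everything else.
public class NoExitDemo {
    static class NoExitManager extends SecurityManager {
        @Override public void checkExit(int status) {
            throw new SecurityException("System.exit(" + status + ") blocked");
        }
        @Override public void checkPermission(java.security.Permission p) {
            // permit everything else
        }
    }

    public static void main(String[] args) {
        try {
            System.setSecurityManager(new NoExitManager());
        } catch (UnsupportedOperationException e) {
            System.out.println("security manager not permitted on this JDK");
            return;
        }
        try {
            System.exit(1); // a hosted app tries to take down the shared JVM
        } catch (SecurityException e) {
            System.out.println("caught: " + e.getMessage());
        } finally {
            System.setSecurityManager(null); // restore normal behaviour
        }
    }
}
```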
You can share data between 2 separate java 'processes' (2 programs started in the same JVM). It's just not obvious.
Create a class loader that loads each 'process' in its own class space.
To pass data between two 'processes', just use shared classes like Strings and Hashtables. If you want to share data between the two class spaces, you have to copy the data from one class space to the other.
For example, if you create a new class Foo and load it in both class spaces, then you can't share instances between the two. However, the class loader that loaded Foo can. You can also do it with reflection, but that's messy. The easiest way is to use the java system classes to share data, since a custom class loader can't redefine those: both class spaces see the same copies.
It's a pain, but you can do it if you must.
would that not amount to basic file IPC (or whatever kind of stream the classloader is using)? i don't really understand what you mean, to be honest.
regards, stepan
> would that not amount to basic file IPC (or whatever kind of stream the classloader is using)? i do not really understand what you mean, to be honest.

It's possible to load the same class in a VM twice, so that you have 2 separate instances of the Class object. Instances created from these two types are not compatible, meaning that you can't assign references of one to the other. This gets really weird when you want to use them in code, because suddenly

    Object myFromOtherLoader = getOtherReference();
    MyClass my = (MyClass) myFromOtherLoader;

throws a ClassCastException: the current execution environment tried to cast to the MyClass type from your current loader, while the object's type is the MyClass from the other loader. Won't work. Reflection allows you to get access to and use methods from the other loader, but it's the only way.
This would work for isolating 2 separate execution environments in SOME cases, since you can share some classes within the VM. However, there are some instances where this falls down: shared libraries, for example. The way JNI works is that you basically load the dll or so file when the class that requires it is loaded. However, you cannot load the same dll or so twice into a process.
So the second time it tries to get loaded, it fails, and the whole VM gets messed up. There's no way for a class loader to detect this before loading a class (well, I guess it could preload the class, detect any native method signatures and not load it in that case). Even if you detected the native signatures, you would have to share that class instance among all your java 'processes', which might not be what you want. Basically, all native classes must be shared throughout a VM. No way around it.
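The "two class spaces" behaviour described above can be demonstrated directly: the same class, defined by two different loaders, yields two distinct Class objects, so a cast across the boundary fails, while JDK system classes (loaded once by the bootstrap loader) can be shared freely. The loader below is a hypothetical minimal example, not a production isolation mechanism.

```java
import java.io.InputStream;

// Defines the nested class Foo a second time with its own loader and
// shows that instances from the other class space cannot be cast back.
public class ClassSpaceDemo {
    public static class Foo {}

    // A loader that defines Foo itself instead of delegating to its parent.
    static class IsolatingLoader extends ClassLoader {
        @Override
        protected Class<?> loadClass(String name, boolean resolve)
                throws ClassNotFoundException {
            if (name.equals(Foo.class.getName())) {
                try (InputStream in = ClassSpaceDemo.class.getResourceAsStream(
                        "/" + name.replace('.', '/') + ".class")) {
                    byte[] bytes = in.readAllBytes();
                    return defineClass(name, bytes, 0, bytes.length);
                } catch (Exception e) {
                    throw new ClassNotFoundException(name, e);
                }
            }
            return super.loadClass(name, resolve); // system classes shared
        }
    }

    public static void main(String[] args) throws Exception {
        Class<?> other = new IsolatingLoader().loadClass(Foo.class.getName());
        System.out.println("same class object? " + (other == Foo.class));

        Object foreign = other.getDeclaredConstructor().newInstance();
        try {
            Foo f = (Foo) foreign; // cast across class spaces
            System.out.println("cast succeeded: " + f);
        } catch (ClassCastException e) {
            System.out.println("cast failed, as expected");
        }
    }
}
```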
ok now i get your idea. you mean to implement an IPC-like mechanism in the sense that you get two separate execution environments from the scopes introduced by classloaders, and that a class's identity is in fact defined by the class itself plus the classloader that loaded it. so instead of utilizing processes as separating containers, you suggested classloaders as a way to come up with separate environments. now i get it, thanks for explaining. however, one has to consider that the two approaches are fundamentally different; both have their pros and cons, i guess.
regards, stepan 
I'm also interested in this, as I'm writing a JIT compilation profiler, which uses JVMPI. I need to communicate between an agent library running in the same JVM process as the profiled app, and a front end controller app in a separate JVM process (for performance). Can I use RMI with JNI (methods declared native in controller that are implemented in profiler agent), or is that just not possible? 
to stephanroots:
exactly. I don't think it's the best idea either. Actually I don't think there is a way to run 2 processes in Java without heavy modification of the APIs and the JVM. It was designed in such a way that all threads must be 'trusted' by each other, or at least that shutting down the VM is a possibility. One reason that Applets never took off was because they were never separated much.
For example, create one applet that creates a thread that never ends (make it sleep or something forever). Declare a static variable in it that is initialized before making the thread that never ends. Now create another applet that accesses that static variable but doesn't initialize it.
Depending on the order in which you execute the applets, you will get different results. Which is just strange, as if you load the applets from different pages they are supposed to be in their own 'contexts'.
Though the new Sun JVM may have fixed this problem. I know this is a problem with the MS JVM.
IPC is hard, and java doesn't solve any problems for you, it just creates new ones. ;) 
steve: i agree with you that the lack of separation between java applications in the same jvm might be a problem. i do not know enough about the subject to make any solid comments; however, the unix approach of total isolation of processes except for explicit IPC mechanisms seems to allow secure environments, where single modules (aka processes) can be designed and implemented independently. the common ground for communication is then added by making some convention about the IPC mechanisms that are in use.
for communicating between java applications i would choose an abstraction that uses streams, however; these streams might then be memory mapped to account for shared memory scenarios. alternatively the streams might originate from sockets, to allow distributed computing.
regards, stepan

General Tasks for Multi-Threading and Distributed Computing

I have not been able to find any information on this subject in the FAQs or online:
Suppose my application is inherently parallelizable, and I wish to write code that implements it, for instance, using Doug Lea's concurrent library to set up a dependency graph of FJTasks (or some other "task" class that essentially wraps a Runnable).
I don't care how the individual tasks get accomplished from one execution of the application to the next. One day I may have an 8-processor machine, in which case I would want a thread pool of size 8 to set up 8 threads that would take turns running the tasks in "order" (order defined by the topological sort of the task dependency graph). The next day I might have a Beowulf cluster, in which case the tasks might all be executed on different nodes using MPI. The next day I might have a list of computers on the internet that have been set up to allow RMI, in which case I want the tasks to execute by remotely invoking methods on the computers. Maybe the next day I have the same computers, but they have all been upgraded with dual processors, and so I want to invoke 2 methods at a time on each one, to take advantage of the extra processor (i.e. have a thread pool of size 2 on each computer)
In each case, the fundamental application being executed hasn't changed; it is still just a set of tasks, some of which can be executed in parallel. Whether the parallelism comes from threads or MPI or RMI makes no difference to the application (assuming each task takes long enough that the network overhead for RMI doesn't bottleneck it). However, I have never seen any implementation of a class that attempts to tie these together into a single "ParallelTask" class. Is this just because the problem is too difficult? Is it because the various methods of parallelizing code are that fundamentally different from one another?
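The backend-agnostic task idea above can be sketched against the ExecutorService interface (the java.util.concurrent package, which descends from Doug Lea's library). The application only declares tasks; the backend is a fixed-size thread pool here, but the same submission code could in principle target an executor backed by RMI calls or a cluster. The pool size and tasks are illustrative.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Tasks are declared once; only the executor backing them changes.
public class ParallelTasksDemo {
    public static void main(String[] args) throws Exception {
        // On an 8-processor box this might be a pool of 8; elsewhere the
        // same code could submit to a remote-execution implementation.
        ExecutorService backend = Executors.newFixedThreadPool(4);

        List<Callable<Integer>> tasks = List.of(
                () -> 1 + 1,
                () -> 2 * 2,
                () -> 3 * 3);

        // The application merges results without caring where tasks ran.
        int sum = 0;
        for (Future<Integer> f : backend.invokeAll(tasks)) {
            sum += f.get();
        }
        backend.shutdown();
        System.out.println("sum = " + sum); // 2 + 4 + 9
    }
}
```

Task dependency ordering (the topological sort mentioned above) would sit on top of this: submit a task only once the futures it depends on have completed.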
This type of problem is precisely what an open-source project of mine was developed to handle!
For lack of a better term, I have dubbed it Transparent Distributed Computing. I am looking for developers to try it out, and to get their feedback. It is hosted on Sun's website, and can be found at the following URL:
Please give it a look.
Best wishes,
You also may want to look into JavaSpaces which is related to Jini.

Single JVM instance

Is there any way to control the number of JVM instances running on a system? I would like some of my applications to run in a single JVM, or to have only one JVM on the system which would run all Java applications.
It would be fairly easy to create a java program that can reflectively execute other java programs on the same VM. Perhaps some type of "drag your jar here" deal. Mac OS X actually does this for you. The java VM is always running. When you start up a java program it invokes it on this VM. The start up times are faster, but there are some other problems. The jury is still out on whether or not this will become more widespread. 
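The "reflectively execute other java programs" idea above can be sketched in a few lines: a launcher looks up another program's class and invokes its static main method. HostedApp here stands in for an arbitrary application class; a real launcher would load it from a jar via a URLClassLoader.

```java
import java.lang.reflect.Method;

// A minimal launcher that invokes another program's main() in this JVM.
public class ReflectiveLauncher {
    public static class HostedApp {
        public static void main(String[] args) {
            System.out.println("hosted app ran with " + args.length + " args");
        }
    }

    public static void main(String[] args) throws Exception {
        Class<?> app = Class.forName(HostedApp.class.getName());
        Method main = app.getMethod("main", String[].class);
        // The cast to Object stops varargs expansion of the array.
        main.invoke(null, (Object) new String[] {"a", "b"});
    }
}
```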
I am currently trying out running programs within a single JVM... There are many things you have to take care of, especially when using Swing applications...
Example: If you use static objects in some classes (or you use classes with static variables from the JDK), you have to use different classloaders for each program run within your JVM. This is necessary as each program needs to have its very own static object, but this makes the memory reduction worthless, as the loaded classes are the main reason for memory consumption. (Some kind of Shareable interface, implemented by all classes not using static members, would help in my opinion. But it is Sun's turn to introduce that in the JDK.)
2nd Example: You have to be very careful to really stop a program within the JVM when it is shut down. This means interrupting all threads (which are not all part of one threadgroup, as e.g. swing and rmi use their own threadgroups), removing all timer events from existing Swing Timers (or java.util.Timers), disposing all windows, and so on...
So it is very hard to produce a really stable application when using only one JVM...
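The shutdown chores described above can be sketched with plain java.util concurrency primitives: to "stop" one hosted program you must cancel its timers and interrupt its threads yourself, since the JVM won't do it for you. The group and timer names are illustrative; a real host would also have to dispose windows and stop Swing timers.

```java
// "Shutting down" one hosted program inside a shared JVM.
public class ShutdownDemo {
    public static void main(String[] args) throws Exception {
        ThreadGroup appGroup = new ThreadGroup("hosted-app");
        java.util.Timer timer = new java.util.Timer("hosted-app-timer", true);

        Thread worker = new Thread(appGroup, () -> {
            try {
                Thread.sleep(60_000); // pretend to work
            } catch (InterruptedException e) {
                System.out.println("worker interrupted");
            }
        });
        worker.start();

        // The host stops the program without System.exit:
        timer.cancel();       // drop its pending timer tasks
        appGroup.interrupt(); // interrupt every thread it owns
        worker.join(5_000);
        System.out.println("worker alive? " + worker.isAlive());
    }
}
```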
I'd say that if you are running multiple AWT/Swing applications at the same time, it is nigh impossible. This is because the AWT is single-threaded. If any one application decides to use the entire thread for its processing time, then all applications will be frozen. There is no way to stop this because of the way that java links to library files. We will have to wait until Sun decides to let us multi-thread the event queue; then it will be possible.
For most applications, which do not freeze for more than about 1 or 2 seconds, it would work, because by the time you have switched to the other program the freeze is already over.
And you can use different classloaders for loading the awt classes for each program. That way you get multiple AWT event dispatcher threads, as they seem to be some kind of singleton (static member).
But anyway, it takes very much effort to get stable with more programs within one JVM. I tried it once and after spending a lot of time I decided not to share...
I think it's Sun's turn to come up with a solution here...

Async Operations in J2EE

I am trying to figure out the BEST way to perform parallel operations within a J2EE container (I have several long running operations that can be done in parallel) in a J2EE EJB method whose results will be merged and returned to the client. Is there a standard way to do this? I can easily spawn some threads, but that is highly frowned upon in the J2EE environment.
I know JMS/MDB is a good way to kick things off in parallel, but I would need to wait for the results to return to the client.
Ideas?? I don't have any good ones.
I too have heard about the "frowned upon" thread spawning, but have never agreed with it. I think especially if you control how many threads you spawn (like, not several for EACH instance of the EJB) and correctly terminate them, it should be fine. But I'd be interested to hear what others may say. 
... though unfortunately the forum is not heavily used at this time of day/night, so it'll probably end up being ignored by tomorrow. 
The primary concern with threads in an EJB environment is the fact that it is the container that takes responsibility for the operation of the runtime environment. If you make use of threads, then the container will have problems managing issues such as concurrency. Furthermore, the container has no means of freeing the resources of the Thread during the passivate operation.
While you may argue that using the ejbActivate and ejbPassivate methods you can actually control the way parallel operations are handled, there should be a better way of accomplishing things.
As far as I can say, try to stick to the specs, unless you are willing to accept the risks involved with using threads. If you can provide an explanation of what you are trying to achieve, then we can say how you can go about achieving what you want.
As a footnote, even workflows, which are one of the main reasons I see a necessity for parallel operations, use a messaging architecture to perform the parallel operations.
> using a messaging architecture to perform the parallel operations

This works really well in EJB-space.
Note that you can use threading operations in Servlet/JSP code (you just have to be very careful), so the combination of the two is quite workable. The biggest caution for going this route is that if the threaded part of the application is under heavy load, you could spin off more threads than the server and/or JVM can handle and crash miserably.
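One way to sketch that caution: funnel all parallel work through a single bounded pool with a bounded queue and a caller-runs fallback, so under heavy load a request thread slows down and does the work itself instead of spawning unbounded threads. The pool and queue sizes are illustrative; tune them to the container's capacity.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Bounded parallelism with backpressure instead of unbounded spawning.
public class BoundedSpawnDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2,                                    // at most 2 workers
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(4),             // at most 4 queued
                new ThreadPoolExecutor.CallerRunsPolicy()); // backpressure

        CountDownLatch done = new CountDownLatch(10);
        for (int i = 0; i < 10; i++) {
            // When pool and queue are full, the submitting thread runs
            // the task itself rather than creating more threads.
            pool.execute(done::countDown);
        }
        done.await();
        pool.shutdown();
        System.out.println("all 10 tasks completed, pool size capped at "
                + pool.getLargestPoolSize());
    }
}
```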
See, here's the thing about why I don't agree that threads should be "frowned" upon in J2EE:
1) Using them hasn't caused us any problems / leaks / whatever
2) An EJB developer may use a 3rd-party library which happens to use threads (and is unaware of that fact, as the API documentation wouldn't mention that) -- and nothing prevents him from doing so. If it's supposed to be banned, then it should be enforced. If he discovers the library uses threads, what's he supposed to do? Get the source and rewrite it? Try to force the author to supply a different "special" version for J2EE usage? Yuck.
3) I think maybe it's just a misconception that threads are to be banned, based on the fact (at least it was once a fact) that EJB methods can't include the word "synchronized" in their declarations -- but that would just make sense, as the container is supposed to guarantee that only one thread at a time accesses the EJB instance.
I've never heard any compelling arguments of why they're "bad" other than the usual "well, the spec says so" -- but it doesn't really seem to say that. Maybe it really just means "don't create rogue threads", but creating a finite, determined number of them when well-written, should be ok. 
Good article. I think this is the approach (w/ MDBs) I am going to take.