How can I map drive identifiers to serial numbers? - ZFS Storage Appliance

For some reason, on our 7310, there is a faulted drive showing under zpool status, but not anywhere in the GUI. I would like to identify that drive by its serial number (it's not being identified by lights). Is there a way from the command line to translate drive identifiers, e.g., c2t5000CCA39CD31559d0, to their corresponding serial numbers?

Hello Mauricev, I can think of two approaches. First, you could run a Support Bundle and, instead of uploading it, download it to a local client for inspection. The Support Bundle contains files that should give you the mapping between objects in the pool configuration and the names used by the OS (e.g., format output). Look in the ZFS folder for ZFS-related output such as pool status and configuration. Some files in the Support Bundle are binary-encoded and not readable as plain text, but the information you need should be in there. How much information there is, and where to find it, depends on the AK software version you run; review Document ID 2021771.1 for information about current software releases.

The other thing I would try is installing the workflow "ZS Basic Configuration Check", as it might give you what you are looking for. See Document ID 2046539.1 for how to install and use the workflow.

Hope this information helps.
Regards,
Peter


Disable fallocate for vdbench files

Hi, is there a way to disable fallocate for vdbench files? We have a specific use case to test with fallocate=false, so we want to know if it is possible to disable fallocate in a vdbench filesystem definition.

Thanks,
Karthick
Once you explain what 'fallocate=false' means I may be able to give you an answer. Henk.
fallocate pre-allocates the file's space; fio has a way to disable pre-allocation on Linux, and something similar in vdbench would be helpful for us. The content below is from the fio documentation (reference: fio(1): flexible I/O tester - Linux man page):

fallocate=str
Whether pre-allocation is performed when laying down files. Accepted values are:
  none   Do not pre-allocate space.
  posix  Pre-allocate via posix_fallocate(3).
  keep   Pre-allocate via fallocate(2) with FALLOC_FL_KEEP_SIZE set.
  0      Backward-compatible alias for 'none'.
  1      Backward-compatible alias for 'posix'.
May not be available on all supported platforms. 'keep' is only available on Linux. If using ZFS on Solaris this must be set to 'none' because ZFS doesn't support it. Default: 'posix'.
This is not an option that exists, nor are there plans to create it.  Henk.
Thank you for the response. -Karthick

How to get machine identifier?

Hello Experts, I'm building a JavaFX POS application. The business is a franchise with lots of branches. To determine which sales belong to which branch, I used to link each user to a specific branch, so on user login I get his branch from the DB and then store his sales under that branch. But now the client is telling me that cashiers might change frequently, and one cashier might work in two branches at the same time (different shifts). So I think the only way to do this is, instead of determining the branch from the user, to determine it from the machine (PC) in that branch, meaning that I'll define the machines of each branch in the database. But to do this I need a way to get machine hardware information such as the CPU or motherboard serial number.

So my first question is: how can I get this information in Java? And second: are those values (CPU or motherboard serial number) unique? I searched Google about this, but I found either a third-party library called OSHI, which I couldn't find any usage information for, or some VB scripts, which I don't think are a good idea.

Thank you for your time,
Gado
Hi, this is not really JavaFX specific, but anyway ... Since Java 1.6 you can retrieve the local MAC address really simply:

NetworkInterface network = NetworkInterface.getByInetAddress(ip);
byte[] mac = network.getHardwareAddress();

If you need low-level OS information, it depends on the platform your clients run on: you can read the output of "Runtime.exec()", or read files such as /proc/* on Linux. Otherwise you can go hardcore and write your own JNI routine to get the necessary info. If you use OSHI (which is a Java native implementation), have a look at its test classes; most of the major methods are used there.

Regards,
Bo
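As a self-contained sketch of the MAC-address approach: getByInetAddress() can return null when the hostname resolves to the loopback address, so this version enumerates the interfaces instead. Note that a MAC address can be changed in software, so it is not a tamper-proof machine identifier.

```java
import java.net.NetworkInterface;
import java.util.Collections;

public class MacAddress {
    /** Returns the first non-loopback MAC address as AA:BB:CC:..., or null if none found. */
    public static String localMac() throws Exception {
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            byte[] mac = nic.getHardwareAddress();
            // Skip the loopback interface and interfaces without hardware addresses
            if (nic.isLoopback() || mac == null || mac.length == 0) continue;
            StringBuilder sb = new StringBuilder();
            for (byte b : mac) {
                if (sb.length() > 0) sb.append(':');
                sb.append(String.format("%02X", b));
            }
            return sb.toString();
        }
        return null; // e.g. no network hardware, or a restricted environment
    }

    public static void main(String[] args) throws Exception {
        System.out.println(localMac());
    }
}
```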
I suggest generating a UUID for the machine and storing it locally, either in a plain config properties file (your app likely already has one; most apps do) or in Preferences. If using Preferences, store the setting under the system root. When your app starts up, read the config to see if the machine UUID is already defined; if not, generate one and save it. Then send the UUID to the server when a communication session with the server is initiated, so that the server knows which machine the session is associated with. The UUID can serve as the primary key for your machine table on the server if you need to persist data related to the machine.
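A minimal sketch of the properties-file variant of this idea; the file name and property key here are illustrative, and a real app would reuse its existing config file (or java.util.prefs.Preferences, as suggested above):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;
import java.util.UUID;

public class MachineId {
    // Hypothetical config location; adjust to wherever the app keeps its settings.
    static final Path CONFIG = Paths.get("app.properties");

    /** Returns the stored machine UUID, generating and persisting one on first call. */
    public static String getOrCreate() throws Exception {
        Properties props = new Properties();
        if (Files.exists(CONFIG)) {
            try (InputStream in = Files.newInputStream(CONFIG)) {
                props.load(in);
            }
        }
        String id = props.getProperty("machine.uuid");
        if (id == null) {
            id = UUID.randomUUID().toString(); // generated once, then reused forever
            props.setProperty("machine.uuid", id);
            try (OutputStream out = Files.newOutputStream(CONFIG)) {
                props.store(out, "machine identity");
            }
        }
        return id;
    }
}
```

On the server side the UUID is just an opaque string, so it can be used directly as the primary key of the machine table.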

O_ASYNC flag in Solaris

I'm migrating code from Linux to Solaris 10 and I'm having some trouble with the fcntl() function. The original code passes the O_ASYNC flag to fcntl(). When I compile the code on Solaris, I get an error that O_ASYNC is an undeclared identifier, and indeed it's not defined in fcntl.h.
What's the parallel mechanism in Solaris? What do I replace the O_ASYNC flag with?
Questions about Solaris are best asked in a Solaris forum:
But in this case, you can find the answer from the fcntl man page:
% man fcntl

The man page lists many flags, and refers you to <fcntl.h>, which in turn includes <sys/fcntl.h>.
I don't find any ASYNC options, so you will have to review the ones that are available to see which best fits your needs.
That's basically just duplicating poll() functionality, with the benefit that the active file descriptor is identified in the siginfo_t structure. That functionality is provided in Solaris by /dev/poll. See:

% man -s 7d poll

Such functionality, where active file descriptors are identified directly, is needed to allow applications such as web servers and databases to scale to thousands and thousands of connections. Using the historical poll() interface, such an application would have to iterate through an array of perhaps 10,000 or more pollfd structures trying to find the active file descriptor(s) each and every time the poll() call detected an active file descriptor.
I know the /dev/poll device in Solaris was developed for just that purpose, and I strongly suspect the O_ASYNC/SIGIO functionality in Linux was created at least in part for that same reason - the scalability of applications handling a lot of connections.
As for your specific problem, if you're not using the information provided by the siginfo_t structure in your current Linux SIGIO handler, I think the simplest way to port your current functionality to Solaris would be to open /dev/poll, create a separate thread that performs the proper devpoll ioctl() on the open /dev/poll file descriptor, and self-signal when an active file descriptor is found, something like

kill( getpid(), SIGIO )

where you'd replace SIGIO with your selected Solaris signal. Then, where your Linux code would open() (or otherwise obtain) and fcntl() your file descriptors, you'd just perform the appropriate ioctl()s on your /dev/poll file descriptor to add them to the poll set being used inside the kernel. Just be sure you also delete your file descriptors from that poll set right before you close them.

If you're actually using the siginfo_t data in your current Linux SIGIO handler, your problem is probably harder, because offhand I can't recall any way for a userland process in Solaris to specify the contents of any signal's siginfo_t structure. If you can find a way to add an equivalent of the Linux SIGIO's siginfo_t to your selected Solaris signal, your problem is easy to solve - just create the /dev/poll polling thread as above and add the siginfo_t data. If you can't come up with a way to add the siginfo_t data with the active file descriptor, your problem could be a lot harder to solve, depending on the design of your application.
Thank you for the detailed answer!
I'll try right away.

SDcard for FRDM-K64F?

Hi, I have followed all the instructions, however the response from the serial port is:

No disk, or could not put SD card in to SPI idle state
Didn't get a response from the disk
Set 512-byte block timed out
configdb_load_to_db error: Unable to open file, trying to recover from the temporal one
configdb_load_to_db error: temporal settings file is unavailable
ERROR: Unable to read configuration file(s). Check that ini file(s) exists in your application directory.

I have used the MBED sample code for the board (FTF2014_workshop - | mbed), and the SD card reads and writes OK, so the card slot works. I have both 2 GB and 4 GB SD cards; they seem to work with other boards. Is any special formatting or setup needed? I've not been able to get the virtual serial port working either...

Thanks for any suggestions,
gb
Hi! Have you put the contents of the sd_card directory onto your card (so that "java" is at the root of the file system)? Andrey
Hi, yes, I have the "java" folder at the root, with just the .ini file inside it. It looks like SD card access with mbed-type devices can be a bit tricky; perhaps I just need to try some different cards? Mine are both SanDisk....

With thanks,
gb
Frankly, I've observed this situation myself; however, after I formatted the card one more time it vanished. I'll check with the dev team whether or not this is a common situation. Just in case: my card is a Transcend. Andrey
I've seen a bit of flakiness with SD cards, but after removing and re-inserting the card the issue went away. Maybe you have a bad SD card slot on the FRDM board?
And by the way, having some samples working does not necessarily mean that everything is fine at the low level. I've seen a situation where the above message was printed but the card was accessed successfully later on. Apparently some OS APIs were able to proceed, but not all of them.

Java Cloud Service: File IO operations

Hi Team, I seriously need a quick and precise response to the challenges I'm having around Java Cloud Service. My requirement is that my Java application needs to do a lot of reads and writes to files in some directory, and this directory will grow to hundreds of gigabytes over time. Can I achieve this with the Java Cloud Service implementation? The write-ups I have seen so far speak about Java Cloud Service - SaaS Extension, in which one has access to a /customer/scratch directory. Is this applicable to a plain Java Cloud Service implementation?
Hi, JCS-SX is different from JCS, so JCS-SX documentation is NOT applicable to JCS. On JCS, I believe you can add/attach some disk space to an instance, but I am unsure of the limits in terms of size and the number of disks one can attach. Have you considered the Storage Cloud Service, which is designed to accommodate large volumes of data? Or a plain Database as a Service? Notice that in JCS, you can give yourself any permission you want, as opposed to JCS-SX. Simply be careful with the disks that SHOULD remain read-only and might change upon reboot.

Regards,
Patrick.
Hi again, from , see:

ORACLE STORAGE CLOUD SERVICE
Do not attach custom storage volumes to a service instance's VMs. Any custom storage volumes that you attach are detached if the service instance is restarted. If a service instance requires additional storage, add storage by scaling the service instance's nodes as explained here.

which points at… , which addresses your concern (up to 1 TB of disk space per attached disk). Does this help?

Regards,
Patrick.
Thank you Patrick, your guide is quite helpful. I pretty much anticipate that Database as a Service will meet my expectations, but that would require some significant changes to the existing solution, which uses the file system for my file IO operations (create a file, append a record to the file, read from the file, etc.). There is a lot of information out on the net, some of which one could spend a few hours on before realising it is probably outdated. I am still trying to put the bits and pieces together notwithstanding. Let me explore the Oracle Storage Cloud option and see if it will meet my requirements. All I am looking to achieve is to simply be able to specify a directory path in my config file for all the read and write operations. My current solution uses something of the form /home/locale/lucene as the directory path, and I want to achieve something similar on the cloud. Am I able to specify a path in like manner that points directly to the provisioned Oracle Storage Cloud? The fact that I can administer and manage the size using the scale up/down option is a good one.
By way of further update: I specified the read/write file IO directory as a subdirectory of the current (.) user directory in my properties file, and the app works perfectly; creation, reads, and updates all work as expected. Admittedly, I've not yet figured out the full path of the file it is writing to, but I assume it will be somewhere in the instance's domain directory path structure. All I did was specify my property value as ./lucene/NG/master in my properties file. With this I resolved one important concern that was bothering me earlier: I can do pure Java file IO operations on files in the cloud after all. Now my question is whether there is a way to browse the files under the domain server instance to monitor them for growth.
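The relative-path approach described above can be sketched as follows; the directory path matches the one mentioned, while the file name and record content are illustrative (in the real app the path would be read from the properties file rather than hard-coded):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class FileStore {
    public static void main(String[] args) throws IOException {
        // In the real app this value would come from the properties file,
        // e.g. dir=./lucene/NG/master (relative to the server's working directory)
        Path dir = Paths.get("./lucene/NG/master").normalize();
        Files.createDirectories(dir);

        Path file = dir.resolve("records.txt");
        // Append a record, creating the file on first use
        Files.write(file, List.of("record-1"),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);

        // Read it back; toAbsolutePath() reveals where the data actually lives
        System.out.println(Files.readAllLines(file));
        System.out.println(file.toAbsolutePath());
    }
}
```

Printing file.toAbsolutePath() is also a quick way to answer the "where is it actually writing?" question: it resolves the relative path against the server process's working directory.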
Hello, there are some weblogic.xml settings that enable browsing files:

index-directory-enabled
Controls whether to automatically generate an HTML directory listing if no suitable index file is found. The default value is false (no listing is generated). Values are true or false.

index-directory-sort-by
Defines the order in which the directory listing generated by weblogic.servlet.FileServlet is sorted. Valid sort-by values are NAME, LAST_MODIFIED, and SIZE. The default sort-by value is NAME.

[...]
resource-reload-check-secs
[...]

Combined with a virtual directory, this might meet your expectations: it maps a URL to a particular directory.

Hope this helps,
Regards,
Patrick.
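A minimal weblogic.xml sketch putting these elements together; the element names come from the WebLogic descriptor documentation, while the local path and URL pattern in the virtual-directory mapping are illustrative assumptions:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <container-descriptor>
    <!-- Generate an HTML directory listing when no index file is found -->
    <index-directory-enabled>true</index-directory-enabled>
    <!-- Sort the generated listing by modification time, useful for growth monitoring -->
    <index-directory-sort-by>LAST_MODIFIED</index-directory-sort-by>
  </container-descriptor>
  <!-- Hypothetical mapping of a URL path to the directory being monitored -->
  <virtual-directory-mapping>
    <local-path>./lucene</local-path>
    <url-pattern>/files/*</url-pattern>
  </virtual-directory-mapping>
</weblogic-web-app>
```

With such a mapping in place, browsing /files/ in the deployed app would list the directory contents, sorted most-recently-modified last.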