Feed aggregator

Java reflection: Get classes and packages names from a root package within a custom URLClassloader

Yann Neuhaus - Thu, 2016-08-18 08:21

I recently had a case where I needed a custom Classloader in order to find the list of classes and packages available across several software plugins, each materialized by a set of jar files. These jar files are expected to sit in a particular location.

After some googling, most of the solutions suggested deploying external libraries, but I was not interested in managing another library in my software just for that purpose.
So the solution was to retrieve all jars from the specific Classloader and loop over them in order to find the classes and packages I was looking for.

This blog post is an extension of a previous one, which only looked for classes within the JVM context Classloader:

http://blog.dbi-services.com/java-get-class-names-from-package-string-in-classpath/


This example shows how to create a new child Classloader to work with, and how to extract package names in addition to the list of classes available under a root package name.
Below is a complete Java class resolving this issue:


import java.io.File;
import java.io.FileFilter;
import java.net.URISyntaxException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.TreeMap;
import java.util.TreeSet;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

/**
 * @author Philippe Schweitzer, dbi services Switzerland
 *
 * A class that demonstrates how to list classes and packages from a given
 * root package name.
 *
 * This class uses a specific Classloader in order to look up jar files
 * located at a dedicated place (and not in the whole JavaSE library).
 *
 */
public class RessourceFinderExample {

    /* Root package name */
    private static final String packageName = "YOUR PACKAGE NAME";

    /* Paths list to jar files */
    private static final String[] classPaths = {"plugins", ".", "dist"};

    /**
     * Main method, it creates the Classloader and displays a consolidated
     * result.
     *
     * @param args no argument required
     */
    public static void main(String[] args) {

        URLClassLoader cl = new URLClassLoader(findJarURLsInClasspath(), Thread.currentThread().getContextClassLoader());

        System.out.println("");

        TreeSet<String> packages = RessourceFinderExample.getPackageFromPackage(packageName, cl);

        System.out.println("");
        System.out.println("-- CONSOLIDATED  RESULT --");
        System.out.println("");

        for (String packageString : packages) {
            System.out.println("   PACKAGE : " + packageString);
        }

        System.out.println("");
        System.out.println("--------------------------");
        System.out.println("");

        TreeMap<String, Class> classes = RessourceFinderExample.getClassesFromPackage(packageName, cl);

        System.out.println("");
        System.out.println("-- CONSOLIDATED  RESULT --");
        System.out.println("");

        for (Class c : classes.values()) {
            System.out.println("   CLASS   : " + packageName + "." + c.getCanonicalName());
        }

        System.out.println("");
        System.out.println("--------------------------");
        System.out.println("");
    }

    /**
     * Method that finds all jar files available in given dedicated classpath
     * places. It serves for an URLClassloader initialization.
     *
     * @return List of jar files URLs
     */
    private static URL[] findJarURLsInClasspath() {
        URL url;

        ArrayList<URL> jarURLs = new ArrayList<>();

        for (String path : classPaths) {

            File[] jars = new File(path).listFiles(new FileFilter() {
                public boolean accept(File pathname) {

                    return pathname.getName().toLowerCase().endsWith(".jar");
                }
            });

            if (jars != null) {
                for (int i = 0; i < jars.length; i++) {
                    try {
                        System.out.println("JAR Path: " + jars[i].getAbsolutePath());
                        url = jars[i].toURI().toURL();

                        jarURLs.add(url);

                    } catch (Exception e) {
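                        // Skip jars whose path cannot be converted to a URL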

                    }
                }
            }
        }

        URL[] urls = jarURLs.toArray(new URL[0]);
        return urls;
    }

    /**
     * Method that returns all jar files registered in the given URLClassloader
     * and which are present in dedicated classpath places.
     *
     * @return List of jar files URLs
     */
    private static URL[] getJarURLs(URLClassLoader cl) {
        URL[] result = cl.getURLs();
        ArrayList<URL> urls = new ArrayList<>();

        for (URL url : result) {

            try {
                Path jarPath = Paths.get(url.toURI());

                for (String classPathString : classPaths) {

                    Path classPath = Paths.get(classPathString).toAbsolutePath();

                    if (jarPath.startsWith(classPath)) {
                        urls.add(url);
                    }
                }

            } catch (URISyntaxException ex) {
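                // Ignore URLs that cannot be converted to a filesystem path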
            }
        }

        result = new URL[urls.size()];
        result = urls.toArray(result);

        return result;
    }

    /**
     * Method that returns all classes available underneath a given package
     * name.
     *
     * @return Map of canonical class names to Classes
     */
    private static TreeMap<String, Class> getClassesFromPackage(String pckgname, URLClassLoader cl) {
        TreeMap<String, Class> result = new TreeMap<>();

        for (URL jarURL : getJarURLs(cl)) {
            getClassesInSamePackageFromJar(result, pckgname, jarURL.getPath(), cl);
        }

        return result;
    }

    /**
     * Method that fills TreeMap with all classes available in a particular jar
     * file, underneath a given package name.
     *
     */
    private static void getClassesInSamePackageFromJar(TreeMap<String, Class> result, String packageName, String jarPath, URLClassLoader cl) {
        JarFile jarFile = null;
        try {

            System.out.println("");
            System.out.println("** IN JAR : " + jarPath);
            jarFile = new JarFile(jarPath);

            // Convert the package name to the jar entry path form once, before scanning entries
            String packagePath = packageName.replace('.', '/');

            Enumeration<JarEntry> en = jarFile.entries();
            while (en.hasMoreElements()) {
                JarEntry entry = en.nextElement();
                String entryName = entry.getName();

                if (entryName != null && entryName.endsWith(".class") && entryName.startsWith(packagePath) && !entryName.substring(packagePath.length() + 1).contains("/")) {

                    try {
                        Class<?> entryClass = cl.loadClass(entryName.substring(0, entryName.length() - 6).replace('/', '.'));
                        if (entryClass != null) {
                            result.put(entryClass.getCanonicalName(), entryClass);
                            System.out.println("   CLASS   : " + entryClass.getCanonicalName());
                        }
                    } catch (Throwable e) {
                        // Some classes may fail to load (e.g. missing dependencies); skip them
//                        System.err.println("Error instantiating: " + entryName + " " + e.toString());
                    }
                }
            }
        } catch (Exception e) {
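            // Failed to open or read the jar file; skip it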
        } finally {
            try {
                if (jarFile != null) {
                    jarFile.close();
                }
            } catch (Exception e) {
            }
        }
    }

    /**
     * Method that returns all package names underneath a given root package
     * name.
     *
     * @return Set of package names
     */
    private static TreeSet<String> getPackageFromPackage(String pckgname, URLClassLoader cl) {
        TreeSet<String> result = new TreeSet<>();

        for (URL jarURL : getJarURLs(cl)) {
            getPackageInPackageFromJar(result, pckgname, jarURL.getPath(), cl);
        }

        return result;
    }

    /**
     * Method that fills the TreeSet with all package names in a particular jar
     * file, underneath a given root package name.
     *
     */
    private static void getPackageInPackageFromJar(TreeSet<String> result, String packageName, String jarPath, URLClassLoader cl) {
        JarFile jarFile = null;
        try {
            System.out.println("");
            System.out.println("** IN JAR : " + jarPath);

            jarFile = new JarFile(jarPath);

            // Convert the package name to the jar entry path form once, before scanning entries
            String packagePath = packageName.replace('.', '/');

            Enumeration<JarEntry> en = jarFile.entries();
            while (en.hasMoreElements()) {
                JarEntry entry = en.nextElement();
                String entryName = entry.getName();

                if (entryName != null && entryName.endsWith("/") && entryName.startsWith(packagePath + "/")) {

                    try {
                        String packageEntryName = entryName.substring(packagePath.length() + 1);
                        packageEntryName = packageEntryName.substring(0, packageEntryName.indexOf("/"));

                        result.add(packageName + "." + packageEntryName);

                        System.out.println("   PACKAGE : " + packageName + "." + packageEntryName);
                    } catch (Throwable e) {
                    }
                }
            }
        } catch (Exception e) {
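            // Failed to open or read the jar file; skip it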
        } finally {
            try {
                if (jarFile != null) {
                    jarFile.close();
                }
            } catch (Exception e) {
            }
        }
    }
}


The article Java reflection: Get classes and packages names from a root package within a custom URLClassloader appeared first on Blog dbi services.

TRIGGER not dropping user in Oracle 11g?

Tom Kyte - Thu, 2016-08-18 07:46
Hi , i have database 11g , and i have user X and want to drop this user just once database startup . so i used this trigger : ---------------------------------------------------------------------------------------------------------- CREATE OR ...
Categories: DBA Blogs

diff between AS and IS in Subprogram syntax

Tom Kyte - Thu, 2016-08-18 07:46
IN The Syntax of Create procedure and function we have two keywords IS,AS what is the difference betwwen them how to use them.Belo I given A link https://livesql.oracle.com/apex/livesql/s/dqk6ejx3lpdaslymswdqfk347 By using is and as i can creat...
Categories: DBA Blogs

Copy huge table without the data on it.

Tom Kyte - Thu, 2016-08-18 07:46
Hi There, I have 2.7 TB table that contains lots of old data to be deleted. Due to the table usage and functionality I am planning to copy this table to a new one and rename after. My table is IOT partitioned and I want to keep the entire s...
Categories: DBA Blogs

Oracle row to column coversion

Tom Kyte - Thu, 2016-08-18 07:46
I was asked this question in an Interview and couldn't crack it. I have a Employee table with following values Id Name Sal 1 Sid 1000 2 Jon 800 3 Ram 600 I want the output to be displayed as follows: 1 2 3 Sid Jon Ram 10...
Categories: DBA Blogs

Convert EE to SE TABLE_EXISTS_ACTION=TRUNCATE

Tom Kyte - Thu, 2016-08-18 07:46
Hi Tom, I was asked to convert EE DB to SE DB with full expdp/impdp. My plan was to do it in 2 steps: 1. impdp to SE just to test if conversion will be ok. 2. impdp to SE with TABLE_EXISTS_ACTION=TRUNCATE with new dump Is this plan safe?...
Categories: DBA Blogs

Convert EE to SE remap_datafile

Tom Kyte - Thu, 2016-08-18 07:46
Hi Tom, I was asked to convert EE DB to SE DB on same machine. I have following doubts regarding creation of tablespaces and datafiles. 1. If I precreate TABLESPACES on SE database before full impdp then import logs error because it tries to...
Categories: DBA Blogs

Database migration from physical server to VM

Tom Kyte - Thu, 2016-08-18 07:46
Hi Tom, We have a new project proposal to migrate the databases from physical servers to VM. When compared to physical servers, VM servers will have some performance impact. I need some information on finding the pre-requisites: 1. What are pe...
Categories: DBA Blogs

Basicfile LOBs

Jonathan Lewis - Thu, 2016-08-18 07:21

There are probably quite a lot of people still using Basicfile LOBs, although Oracle Corp. would like everyone to migrate to the (now default) Securefile LOBs. If you’re on Basicfile, though, and don’t want (or aren’t allowed) to change just yet here are a few notes that may help you understand some of the odd performance and storage effects.

Of course, there are a lot of variations in how you declare the LOB – pctversion vs. retention, cache vs. nocache, logging vs. nologging, enable vs. disable storage in row, and I can’t cover all the combinations – so what I’ll be looking at is a general strategy for handling a large number of small LOBs that are being inserted into the database at a fairly high degree of concurrency, and then being deleted a few weeks later so, in theory, the LOB segment should end up at a steady state with “N” days worth of data stored. The driver behind this modelling is a problem I was asked to examine a little while ago.

Some background details on Basicfile LOBs

If the LOB column is defined as “enable storage in row” then a very small LOB (up to 3,960 bytes) will be stored almost as if it were an ordinary column in the row; if the size of a LOB is a little larger than this limit then the LOB will be stored in “chunks” in the LOB segment and pointers to the chunks will be stored in the row – for the first 12 chunks. The chunk size defined for a LOB column can be up to 32KB – though the default chunk size is the block size for the containing tablespace – so it’s possible to store a LOB of up to roughly 384KB before Oracle needs to create index entries in the LOBINDEX segment if you’re using 32KB blocks, though most people use 8KB blocks and will hit the change at roughly 96KB.

If the LOB column is defined as “disable storage in row” then no matter how small it really is it will always take up at least one chunk in the LOB segment and will have a corresponding index entry in the LOBINDEX.  For reasons of efficiency an entry in a LOB index always “carries” 32 bytes of pointer data, allowing it to list up to 8 chunks.

When a LOB is deleted (replaced by a null, an empty_lob(), or a new LOB value) the previous state of the base table row and the LOB index will be preserved in the undo segment in the ordinary manner, but the previous version of the LOB data itself is simply left in the segment (and a new version of the LOB created if the operation is an “update”). The chunks making up the old version are added to the LOB index with a key based on the time (seconds since 1st Jan 1970) the delete took place – this means that when Oracle wants to re-use space in the LOB segment it can walk the LOB index in order to find the chunks that were marked as available for reuse the greatest time into the past. (It also means that the LOB index is one of the strangest in the Oracle pantheon – part of it indexes “available chunks keyed by time”, part of it indexes “current chunks keyed by LOB id”.)

There are two options for how long old versions of LOBs will be kept: PCTVERSION specifies the percentage of space below the segment’s highwater mark that may be used to keep old versions, and (until 12c, where things change) RETENTION specifies that Oracle should try to keep old versions for the length of time given by the system parameter undo_retention. If enough versions of LOBs have been kept Oracle can create a read-consistent version of a given LOB by using the normal undo mechanisms to take the base table row and LOB index back to the correct point in time which will then ensure that the LOB pointers will be pointing to the correct chunks.  (If the LOB chunks have been over-written this is the point where you will get an Oracle error: “ORA-22924 Snapshot too old”, followed by a misleading “ORA-01555 Snapshot too old ….”)
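For concreteness, here is a minimal sketch of how the two options are declared – this is not part of the original demo, the table and segment names are purely illustrative, and remember that pctversion and retention are mutually exclusive:

-- keep old LOB versions in up to 10% of the space below the segment HWM
create table t_pct (id number, c1 clob)
lob (c1) store as basicfile lob_pct(
        disable storage in row
        chunk 8k
        pctversion 10
        nocache
);

-- ask Oracle to keep old LOB versions for roughly undo_retention seconds
create table t_ret (id number, c1 clob)
lob (c1) store as basicfile lob_ret(
        disable storage in row
        chunk 8k
        retention
        nocache
);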

One final interesting point from a performance perspective is what happens if you define the LOB to be “nocache”, which means that typical reads and writes of the LOB will use direct path; and if you’ve specified nologging then reads and writes of the LOB will generate tiny amounts of redo log. Two special points to go with this, though: first, if you specify “nocache logging” the direct path writes will be logged, but the log content will be by chunk – so if you store 4,000 bytes of data in a LOB with a 32KB chunk size you will write 32KB of redo log; secondly, if you are testing the effects of logging and nologging, make sure your test database is running in archivelog mode if your production database is going to be archiving – otherwise Oracle will fool you by taking a short cut and NOT logging a nocache LOB even if you specify logging! The LOB index is always cached and logged, by the way, and even if the LOB is defined as nocache there are circumstances where LOB blocks are read into the buffer cache (remember my previous note explaining that we saw 6 billion buffer gets on a nocache LOB).

The last detail I want to mention is the FREEPOOLS parameter. The developer’s guide for 11.2 describes this as: “Specifies the number of FREELIST groups for BasicFiles LOBs, if the database is in automatic undo mode.” Unfortunately freelists and freelist groups are things that happen in manual segment space management, so this definition requires an alternative meaning for the expression “FREELIST groups”. The purpose of FREEPOOLS is to help deal with concurrency problems, but there’s not much information around to help you understand the mechanisms and pitfalls, and the available documents on MoS don’t really do anything to clarify the position – and that’s what this article is (finally) going to talk about.
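As a syntax reference only, a sketch of the clause – reusing the test table that appears below but with four free pools (freepools, like retention, requires automatic undo management):

create table t1(
        id      number constraint t1_pk primary key,
        c1      clob
)
lob (c1)
store as basicfile text_lob(
        disable storage in row
        chunk 8k
        retention
        freepools 4
        nocache
        tablespace test_8k_assm
)
;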

Basicfile FREEPOOLs – the truth is out there

When you specify FREEPOOLS you affect the way Oracle uses the LOB index – not the space management information about the segment holding the index, but the actual content of (in fact the key values held by) the index.

You can do a treedump of a LOB index by object_id in the standard way you can do a treedump of any B-tree (or bitmap) index, and you can dump blocks from a LOB index in the same way you dump any other data block in the database, by file number and block number (or block range), so it’s easy to see what happens in a LOB index when you start using multiple freepools. I’ve created a table holding a LOB defined as “disable storage in row” so that I always use the LOB index, inserted three rows, then deleted one of them and dumped the one index block (which happens to be both the root and a leaf). Here’s the SQL to create the table and do the data handling:


create table t1(
        id      number constraint t1_pk primary key,
        c1      clob
)
lob (c1)
store as basicfile text_lob(
        disable storage in row
        chunk 8k
        retention
        nocache
        tablespace test_8k_assm
)
;


declare
        m_v1 varchar2(32767) := rpad('x',12000,'x');
begin
        for i in 1..3 loop
                insert into t1 values (i, m_v1);
                commit;
        end loop;
end;
/


delete from t1 where id = 1;
commit;

alter system flush buffer_cache;

I’ve ended by flushing the buffer cache so that I don’t get a huge trace file when I try to dump the index to disc. Here’s the next bit of processing:


SQL> select object_id from user_objects where object_type = 'INDEX' and object_name like 'SYS_IL%';

 OBJECT_ID
----------
    241599

SQL> alter session set events 'immediate trace name treedump level 241599';

----- begin tree dump
leaf: 0x1800204 25166340 (0: nrow: 4 rrow: 3)
----- end tree dump

SQL> alter session dump datafile 6 block 516;

I’ve included in the above the treedump that I extracted from the tracefile, and this shows that the index consists of a single leaf block (0x1800204 = file 6 block 516) with 4 row directory entries of which one has been deleted. Here’s the row dump from that leaf block – the first three entries are the index entries identifying the three LOBs I created (and, as shown by the flag value “—D–“, the first has been marked as deleted); the fourth entry points to a set of free chunks (corresponding to the chunks that will become available for re-use after a delay corresponding to the undo retention time).


row#0[7982] flag: ---D--, lock: 2, len=50, data:(32):
 00 20 03 00 00 00 00 01 0f 1c 00 00 00 00 00 01 01 80 01 a6 01 80 01 aa 00
 00 00 00 00 00 00 00
col 0; len 10; (10):  00 00 00 01 00 00 09 d6 64 85
col 1; len 4; (4):  00 00 00 00

row#1[7932] flag: ------, lock: 0, len=50, data:(32):
 00 20 03 00 00 00 00 01 0f 1c 00 00 00 00 00 01 01 80 01 ae 01 80 01 b2 00
 00 00 00 00 00 00 00
col 0; len 10; (10):  00 00 00 01 00 00 09 d6 64 86
col 1; len 4; (4):  00 00 00 00

row#2[7882] flag: ------, lock: 0, len=50, data:(32):
 00 20 03 00 00 00 00 01 0f 1c 00 00 00 00 00 01 01 80 01 b6 01 80 01 ba 00
 00 00 00 00 00 00 00
col 0; len 10; (10):  00 00 00 01 00 00 09 d6 64 87
col 1; len 4; (4):  00 00 00 00

row#3[7832] flag: ------, lock: 2, len=50, data:(32):
 01 80 01 a6 01 80 01 aa 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 00 00 00 00 00 00 00
col 0; len 10; (10):  00 01 57 b3 32 9b 00 00 00 00
col 1; len 4; (4):  01 80 01 a6

If you examine the entries closely you will see that despite the common structure of all four of them there are two patterns. Ignoring the “data (32):” portion and looking at just “col 0” the last few bytes of the first three entries hold consecutive numbers which are actually the LOB Ids for the three LOBs. The fourth entry breaks that pattern and if you examine bytes 3 to 6 you will find that that is (approximately, by the time I publish this article) the number of seconds since 1st Jan 1970.

To a large degree you need only examine “col 0” to get a good idea of how Oracle handles the LOB index, but I will say just a few things about the rest of the entry.  For the “reusable space” index entries “col 1” is the first of a list of up to 8 chunks that were released from the same LOB at that moment, and the “data(32)” is the full list of those 8 chunks – each chunk is identified by the block address of the first block of the chunk. If I had created and deleted a LOB of roughly 128KB I would have used 16 chunks to create it and made 16 chunks available on deletion, so there would have been two index entries with the same “col 0” value, each identifying 8 of the chunks – hence the re-appearance of the first chunk as “col 1”.  (You can’t help wondering why Oracle doesn’t squeeze 9 chunks per index entry rather than repeating the first of the list – maybe there’s a peripheral effect that makes 8 easier, maybe it’s simply a good sanity check mechanism.)

For index entries about current LOBs “col 1” is a counter for the index entries that identify the entire LOBs. Our LOBs were all very small so we only needed one index entry (which Oracle starts counting from zero).  The “data (32)” entry for the “zeroth” entry starts with 16 bytes of metadata then holds up to 4 pointers to chunks; subsequent entries don’t need the metadata and can hold up to 8 pointers each and “col 1” stores the chunk number that the index entry starts with, so “col 1” in consecutive index entries for a given LOB id will have values 0, 4, 12, 20, etc.

You might note, by the way, that my LOBs are not made up of consecutive blocks even though my chunk size is exactly one block. This is a side effect of ASSM (automatic segment space management) and nothing specifically to do with LOBs.

With this sketch in place you now have some idea of how a LOB index works. Apart from the convenience of knowing roughly what information is stored in the index, and how it has this strange dual purpose, you can now view it just like any other B-tree index in Oracle. When you insert a LOB you insert some index entries into the middle of the index (the high-value point of the LOB Id bit), when you delete a LOB you mark some (consecutive) index entries as deleted and insert some index entries at the high end of the index (because each delete is the most recent delete).

As soon as you’ve got that far you realise that if you have some degree of concurrency of inserts and deletes then you have two hot spots in the index – the mid point where you’re going to get lots of 50/50 splits as LOBs are inserted and the end point where all the re-usable chunks are indexed. So how do you configure Oracle, and what does Oracle do, to reduce the contention?

Take another look at the “col 0” values – which I’ve cut out and listed in isolation below:


col 0; len 10; (10):  00 00 00 01 00 00 09 d6 64 85
col 0; len 10; (10):  00 00 00 01 00 00 09 d6 64 86
col 0; len 10; (10):  00 00 00 01 00 00 09 d6 64 87

col 0; len 10; (10):  00 01 57 b3 32 9b 00 00 00 00

Apart from making it easy to see the sequencing in the 3 LOB ids it’s now easy to note that the first three (LOB) entries start with “00 00” while the last (reusable space) entry starts with “00 01”. It’s really these first two bytes that make it easy for Oracle to separate the current LOBs section of the index from the reusable space section. The two bytes are the freepool identifier – it’s the first (and only) free pool – but Oracle is counting from zero, doubling the counter for the current LOBs, and doubling and adding one for the reusable space.

Here are some results when I drop and recreate the table with freepools 4 and repeat the experiment. (I’ve removed the “data(32)” content to make the output a little cleaner, and then extracted the “col 0” values).


row#0[7982] flag: ---D--, lock: 2, len=50, data:(32):
col 0; len 10; (10):  00 06 00 01 00 00 09 da 36 55
col 1; len 4; (4):  00 00 00 00

row#1[7932] flag: ------, lock: 0, len=50, data:(32):
col 0; len 10; (10):  00 06 00 01 00 00 09 da 36 56
col 1; len 4; (4):  00 00 00 00

row#2[7882] flag: ------, lock: 0, len=50, data:(32):
col 0; len 10; (10):  00 06 00 01 00 00 09 da 36 57
col 1; len 4; (4):  00 00 00 00

row#3[7832] flag: ------, lock: 2, len=50, data:(32):
col 0; len 10; (10):  00 07 57 b3 3b a5 00 00 00 00
col 1; len 4; (4):  01 80 01 df

===

col 0; len 10; (10): 00 06 00 01 00 00 09 da 36 55
col 0; len 10; (10): 00 06 00 01 00 00 09 da 36 56
col 0; len 10; (10): 00 06 00 01 00 00 09 da 36 57

col 0; len 10; (10): 00 07 57 b3 3b a5 00 00 00 00

It just happened that with 4 freepools available my session picked freepool 3, so its LOB index entries are preceded with 00 06 (2 * 3), and its reusable space index entries are preceded with 00 07 (2 * 3 + 1). At present I think the freepool chosen by a session (counting from zero) is derived from the session’s process id (pid) by a simple mod(pid, freepools).
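If you want to see which freepool your own session would map to under that assumption, one way is to look up your pid – the mod() mapping itself is my inference rather than anything documented, so treat this as a sketch:

select  p.pid, mod(p.pid, 4) assumed_freepool
from    v$process p, v$session s
where   p.addr = s.paddr
and     s.sid  = sys_context('userenv', 'sid')
;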

So what happens if I start a second session and adjust my little PL/SQL procedure to insert rows 4, 5, and 6 (and again delete the first of them)?

I expect to see two things. First, the “ordinary” B-tree event – the index entry that’s marked for deletion will be cleared out of the index; secondly, I should see four new index entries (one marked as deleted) which, with a little luck (one chance in four), will show that they are associated with a different freepool.

Here’s the dump (again with the “data(32)” deleted, and the “col 0” extracted at the end):


row#0[7782] flag: ---D--, lock: 2, len=50, data:(32):
col 0; len 10; (10):  00 00 00 01 00 00 09 da 36 87
col 1; len 4; (4):  00 00 00 00

row#1[7732] flag: ------, lock: 0, len=50, data:(32):
col 0; len 10; (10):  00 00 00 01 00 00 09 da 36 88
col 1; len 4; (4):  00 00 00 00

row#2[7682] flag: ------, lock: 0, len=50, data:(32):
col 0; len 10; (10):  00 00 00 01 00 00 09 da 36 89
col 1; len 4; (4):  00 00 00 00

row#3[7632] flag: ------, lock: 2, len=50, data:(32):
col 0; len 10; (10):  00 01 57 b3 3b ad 00 00 00 00
col 1; len 4; (4):  01 80 01 a4

row#4[7932] flag: ------, lock: 0, len=50, data:(32):
col 0; len 10; (10):  00 06 00 01 00 00 09 da 36 56
col 1; len 4; (4):  00 00 00 00

row#5[7882] flag: ------, lock: 0, len=50, data:(32):
col 0; len 10; (10):  00 06 00 01 00 00 09 da 36 57
col 1; len 4; (4):  00 00 00 00

row#6[7832] flag: ------, lock: 0, len=50, data:(32):
col 0; len 10; (10):  00 07 57 b3 3b a5 00 00 00 00
col 1; len 4; (4):  01 80 01 df

===

col 0; len 10; (10): 00 00 00 01 00 00 09 da 36 87
col 0; len 10; (10): 00 00 00 01 00 00 09 da 36 88
col 0; len 10; (10): 00 00 00 01 00 00 09 da 36 89

col 0; len 10; (10): 00 01 57 b3 3b ad 00 00 00 00

col 0; len 10; (10): 00 06 00 01 00 00 09 da 36 56
col 0; len 10; (10): 00 06 00 01 00 00 09 da 36 57

col 0; len 10; (10): 00 07 57 b3 3b a5 00 00 00 00

The index entry previously marked as deleted has disappeared (it was LOB id “09 da 36 56”).

We have four new index entries – the first 4 in the list above – and we can see that our second session has been allocated to freepool 0, the LOB index entries are preceded by “00 00”, and the reusable space index entry is preceded by “00 01”.

So by declaring freepools N, we effectively break the index up into 2N nearly discrete sections. Half the sections get inserts at the high end as we insert new LOBs (with ever increasing LOB ids) and the other half (apart, sometimes, from the very top section) get inserts at the high end as time passes and we make LOB space available for reuse by deleting existing LOBs. (Note – if two LOBs of more than 8 chunks each are deleted in the same hundredth of a second then their index entries may end up interleaving, as the full key is (timestamp, first chunk address) and the chunks may be scattered widely across the tablespace). Freepools allow Oracle to remove the two killer hot spots in the index.

There are side effects, of course: apart from the section for reusable space in the top freepool each section of the index will be subject to 50/50 block splits so the index will always be roughly twice the optimum size – plus a lot more due to other side effects of how the index is used if you’re constantly deleting and inserting LOBs. But that’s not really a big problem; I’ll be examining further side effects of the LOB index, and the mechanism that Oracle has for using the index, and the performance threats this introduces, in the next installment.

I got a bit rushed writing this so there may be some typos and grammar errors which I’ll correct over the next couple of days. Feel free to point out any that you notice.


I’m Logesh Balasubramaniam and this is how I work

Duncan Davies - Thu, 2016-08-18 07:00

The next profile in our ‘How I Work‘ series is Logesh Balasubramaniam. Logesh is one of the newer bloggers – posting on his LeanIT Designs site – however he provides great content and is usually one of the first to tackle new functionality. He has sometimes even managed to blog about new features before Oracle’s official post.

If you’re a fan of Logesh’s work it looks like he’s been selected to present at OOW this year, alongside his colleague Chamanthi Weerasinghe, on “Bang for Your Buck: A Practical Approach to Optimize Your Selective Adoption”.

Logesh

Name: Logesh Balasubramaniam

Occupation: PeopleSoft Consultant at Presence Of IT, Australia.
Location: Auckland, New Zealand.
Current computer: Lenovo Thinkpad T440s
Current mobile devices: Samsung Galaxy S3 & iPad Mini 2
I work: To achieve the best possible outcome in the realms of Scope, Time & Cost

What apps/software/tools can’t you live without?
Apps – Kindle, LinkedIn, Keeper (passwords), doubleTwist
Desktop essentials: MS office, Outlook, Notepad++, Snipping Tool

What’s your workspace like?
I am a minimalist by nature and my workplace reflects it. I don’t tend to carry any baggage (both hard & soft) and have a compulsion to get rid of anything once it has served its purpose.

desk

What do you listen to while you work?
Silence; it helps me to focus. I leave my windows open and let in everything that my surroundings offer.

What PeopleSoft-related productivity apps do you use?
Browsers: Chrome & Firefox, Developer tools in both browsers, Extensions – Wizdler for chrome, HTTPFox for Firefox
PeopleTools – Whole suite of development tools and occasionally administration utilities
SOAPUI – Everything integration related
SQL Developer and SQL Server management studio

Do you have a 2-line tip that some others might not know?
I find ‘My Oracle Support Community (MOSC)’ to be very active and useful in the PeopleSoft application space. Especially for those working on new features in recent product versions: post your queries and you will get immediate help from Oracle specialists. I also find the Red Papers released on various topics by Oracle to be a very valuable resource for designing solutions, especially in the integration, security and performance areas.

What SQL/Code do you find yourself writing most often?
Having spent most of my time in the HCM application, a subquery to get MAX(EFFDT) tops my list.
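For readers outside the PeopleSoft world, this is a sketch of the classic effective-date pattern being referred to – PS_JOB and the columns used here are the usual illustration, not taken from the interview:

select  j.emplid, j.empl_rcd, j.effdt, j.deptid
from    ps_job j
where   j.effdt = (select max(j2.effdt)
                   from   ps_job j2
                   where  j2.emplid   = j.emplid
                   and    j2.empl_rcd = j.empl_rcd
                   and    j2.effdt   <= sysdate);

A complete version would also pick MAX(EFFSEQ) within that top effective date.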

What would be the one item you’d add to PeopleSoft if you could?
Support/implement SAML based identity federation.

What everyday thing are you better at than anyone else?
Starting every day with the same eagerness, desire and will I had on my first day at work.

What’s the best advice you’ve ever received?
Nothing specific from a person, but an elementary quote that I find most relevant in our profession is “A stitch in time saves nine”. As remote work culture and diverse work teams become ubiquitous, proactive communication is key to successful collaboration.


Oracle TO_CLOB Function with Examples

Complete IT Professional - Thu, 2016-08-18 06:00
In this article, I’ll take a look at the Oracle TO_CLOB function, and show you some examples. Purpose of the Oracle TO_CLOB Function The TO_CLOB function converts a value in an NCLOB column (the national character set) to a CLOB (in the database character set).   Syntax The syntax of the TO_CLOB function is quite […]
Categories: Development

Ravello cloud virtualization

Pat Shuff - Thu, 2016-08-18 02:07
Yesterday we talked about what it would take to go from a bare metal or virtualized solution in your data center to a cloud vendor. We found out that it is not only difficult but requires some work to make it happen. There are tools to convert your VMDK code to AMI with Amazon, or VHD with Microsoft, or tar.gz format with Oracle. That's the fundamental problem: there are tools to convert. You can't simply pull a backup of your bare metal install or the VMDK code and upload and run it. Java ran into this problem during its early years. You could not take a bundle of C or C++ code, take the binary, and run it on a Mac or Linux or Windows. You had to recompile your code and hope the libc or libc++ library was compatible from operating system to operating system. A simple recompile should have solved the problem, but the majority of the time it required a conditional compile or a different library on a different operating system to make things work. The basic problem was that things like network connections or reading and writing from a disk were radically different. On Windows you use a backslash and on Linux and MacOS you use a forward slash. File names and lengths are different and can or can't use different characters. Unfortunately, the same is true in the cloud world. A virtual network interface is not the same between all of the vendors. Network storage might be accessible through an iSCSI mount, an NFS mount, or only a REST API. The virtual compute definition changes from cloud vendor to cloud vendor, thus creating a need for a virtualization shim similar to the programming shim Java provided a few decades ago. Ravello stepped in and filled this gap for the four major cloud vendors.

Ravello Systems stepped in a few years ago, took the VMDK disk image proposed by VMWare, and wrote three components to virtualize a cloud vendor to look like a VMWare system. The three components are nested virtualization, software defined networking, and virtual storage interfaces. The idea was to take not only a single system that made up a solution but a group of VMWare instances, and import them into a cloud vendor unchanged. The user took a graphical user interface, mapped the network relationships between the instances, and deployed these virtual images into a cloud vendor. The basis of the solution was to deploy the Ravello HVX hypervisor emulator onto a compute instance in the cloud vendor for each instance, then deploy the VMWare VMDK on top of the HVX instance. Once this was done, the storage and network interfaces were mapped according to the graphical user interface connections and the VMDK code could run unchanged.

Running a virtual instance unchanged was a radical concept – so radical that Oracle purchased Ravello Systems early this spring and expanded the sales force of the organization. The three key challenges faced by Ravello were that 50% of the workloads that run in customer data centers do not port well to the cloud, that many of these applications utilize layer 2 IP protocols which are typically not available in most cloud environments, and that VMWare implementations on different hardware vendors generate virtual code and configurations different enough to make it difficult to map them to any cloud vendor. The first solution was to virtualize the VMWare ESX and ESXi environment and layer it on top of multiple cloud vendor solutions. When an admin allocates a processor, does this mean a thread, as it does in AWS, or a core, as it does in Azure and Oracle? When a network is allocated and given a NAT configuration, can this be done on the cloud infrastructure or does it need to be emulated in the HVX?

The nested virtualization engine was designed to run saved VMWare code natively without change. Devices from the cloud vendor were exposed to the code as VMWare devices and virtual devices. The concept was to minimize the differences between different cloud solutions and make the processor and hypervisor look as much like ESX and ESXi as possible. When virtualization extensions are available, the easiest way to implement the illusion is using "trap and emulate", which works as follows: the hypervisor configures the processor so that any instruction that can potentially "break the illusion" (e.g., accessing the memory of the hypervisor itself) will generate a "trap". This trap will interrupt the guest and transfer control to the hypervisor. The hypervisor then examines the offending instruction, emulates it in a safe way, and allows the guest to continue executing. HVX, the Ravello hypervisor, instead employs a technology called binary translation to implement high-performance virtualization. Unlike the trap-and-emulate method, binary translation works even when virtualization extensions are not available.

Pure L2 access is difficult and VLANs, span ports, broadcast/multicasting usually do not work. Ravello allows you to run existing multi-VM applications unmodified in the cloud, not just single virtual machines. To make this possible, Ravello provides a software-defined network that virtualizes the connectivity between the virtual machines in an application. The virtual network is completely user-defined and can include multiple subnets, routers, and supplemental services such as DHCP, DNS servers and firewalls. The virtual network can be made to look exactly like a datacenter network. The data plane of the virtual network is formed by a fully distributed virtual switch and virtual router software component that resides within HVX. Network packets that are sent by a VM are intercepted and injected into the switch. The switch operates very similar to a regular network switch. For each virtual network device, the virtual switch creates a virtual port that handles incoming and outgoing packets from the connected virtual NIC device.

Ravello’s storage overlay solution focuses on performance, persistence and security. It abstracts native cloud storage primitives such as object storage and various types of block devices into local block devices exposed directly to the guest VMs. Everything from the device type and controller type to the location on the PCI bus remains the same, hence it appears to the guest as if it were running on its original datacenter infrastructure. This allows the guest VM to run exactly as is, with its storage configuration unchanged, as if it were running on premises. Cloud storage abstraction (and presentation as a local block device), coupled with the HVX overlay networking capabilities, allows for running various NAS appliances and their consumption over network based protocols such as iSCSI, NFS, CIFS and SMB. These block devices are backed by a high performance copy-on-write filesystem which allows us to implement our multi-VM incremental snapshot feature.

We could walk through a hands-on lab developed by the Ravello team to show how to import a Primavera on-site deployment into the Oracle Compute Cloud. The block diagram looks like the picture shown below. We import all of the VMDK files and connect the instances together using the GUI-based application configuration tool.

Once we have the instances imported we can configure the network interfaces by adding a virtual switch, virtual gateway, virtual nic, assigning public IP addresses, and adding a VLAN to the configuration.

Ravello allows us to define features that are not supported by the cloud vendors. For example, Amazon and Microsoft don't allow layer 2 routing or multicast broadcasting; VMWare allows both. The HVX layer traps these calls and emulates the features, for example by implementing ping over TCP, or by emulating multicast broadcasts through opening connections to all hosts on the network and sending packets to each host. In summary, Ravello allows you to take your existing virtualization images from VMWare and deploy them to virtually any cloud compute engine. The HVX hypervisor provides the shim and even expands some of the features and functions that VMWare provides to cloud vendors. Functions like layer 2 routing, VLAN tagging, and multicast/broadcast packets are supported through the HVX layer between instances.

Configuring Multiple local_listener

Michael Dinh - Thu, 2016-08-18 00:16

I was working on configuring multiple local listeners and had difficulties setting local_listener using a full address list.

How would you set up local_listeners?

Demo: There are 4 listeners, 1-4.
$ lsnrctl status listener1

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 17-AUG-2016 21:34:31

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1551)))
STATUS of the LISTENER
------------------------
Alias                     listener1
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                17-AUG-2016 21:34:08
Uptime                    0 days 0 hr. 0 min. 22 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0.4/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/arrow/listener1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1551)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.11)(PORT=1551)))
Services Summary...
Service "leo" has 1 instance(s).
  Instance "leo", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

$ lsnrctl status listener2

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 17-AUG-2016 21:34:33

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1552)))
STATUS of the LISTENER
------------------------
Alias                     listener2
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                17-AUG-2016 21:34:10
Uptime                    0 days 0 hr. 0 min. 23 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0.4/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/arrow/listener2/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1552)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.11)(PORT=1552)))
Services Summary...
Service "leo" has 1 instance(s).
  Instance "leo", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

$ lsnrctl status listener3

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 17-AUG-2016 21:34:35

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1553)))
STATUS of the LISTENER
------------------------
Alias                     listener3
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                17-AUG-2016 21:34:13
Uptime                    0 days 0 hr. 0 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0.4/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/arrow/listener3/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1553)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.11)(PORT=1553)))
Services Summary...
Service "leo" has 1 instance(s).
  Instance "leo", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

$ lsnrctl status listener4

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 17-AUG-2016 21:34:36

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1554)))
STATUS of the LISTENER
------------------------
Alias                     listener4
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                17-AUG-2016 21:34:15
Uptime                    0 days 0 hr. 0 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0.4/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/arrow/listener4/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1554)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.11)(PORT=1554)))
Services Summary...
Service "leo" has 1 instance(s).
  Instance "leo", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
Modify database LOCAL_LISTENER parameter.
alter system set LOCAL_LISTENER="(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT=1551))
(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT=1552))
(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT=1553))
(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT=1554))
  6  )";
alter system set LOCAL_LISTENER="(ADDRESS_LIST=
                                *
ERROR at line 1:
ORA-00972: identifier is too long


alter system set LOCAL_LISTENER="(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT=1551))
(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT=1552))
(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT=1553))
  5  )";
alter system set LOCAL_LISTENER="(ADDRESS_LIST=
                                *
ERROR at line 1:
ORA-00972: identifier is too long


alter system set LOCAL_LISTENER="(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT=1551))
(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT=1552))
  4  )";

System altered.

ARROW:(SYS@leo):PRIMARY> show parameter local_listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      (ADDRESS_LIST=
                                                 (ADDRESS=(PROTOCOL=TCP)(HOST=1
                                                 92.168.56.11)(PORT=1551))
                                                 (ADDRESS=(PROTOCOL=TCP)(HOST=1
                                                 92.168.56.11)(PORT=1552))
                                                 )
ARROW:(SYS@leo):PRIMARY>
Using single quotes works.
ARROW:(SYS@leo):PRIMARY>
alter system set LOCAL_LISTENER='(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT=1551))
(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT=1552))
(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT=1553))
(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT=1554)))'
  6  ;

System altered.

ARROW:(SYS@leo):PRIMARY> show parameter local_listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      (ADDRESS_LIST=
                                                 (ADDRESS=(PROTOCOL=TCP)(HOST=1
                                                 92.168.56.11)(PORT=1551))
                                                 (ADDRESS=(PROTOCOL=TCP)(HOST=1
                                                 92.168.56.11)(PORT=1552))
                                                 (ADDRESS=(PROTOCOL=TCP)(HOST=1
                                                 92.168.56.11)(PORT=1553))
                                                 (ADDRESS=(PROTOCOL=TCP)(HOST=1
                                                 92.168.56.11)(PORT=1554)))
ARROW:(SYS@leo):PRIMARY>

Interesting.
$ oerr ora 972
00972, 00000, "identifier is too long"
// *Cause:  An identifier with more than 30 characters was specified.
// *Action:  Specify at most 30 characters.
Configure tnsnames for listeners
$ cat tnsnames.ora
leo =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = arrow.localdomain)(PORT = 1531))
    )
    (CONNECT_DATA =
      (SID = leo)
    )
  )

LISTENER_1551_1554 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT= 1551))
      (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT= 1552))
      (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT= 1553))
      (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT= 1554))
    )
  )

$ tnsping LISTENER_1551_1554

TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 17-AUG-2016 21:48:43

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT= 1551)) (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT= 1552)) (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT= 1553)) (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.11)(PORT= 1554))))
OK (0 msec)
Modify database LOCAL_LISTENER parameter using tnsnames and register listener.
ARROW:(SYS@leo):PRIMARY> alter system set LOCAL_LISTENER="LISTENER_1551_1554";

System altered.

ARROW:(SYS@leo):PRIMARY> alter system register;

System altered.

ARROW:(SYS@leo):PRIMARY> show parameter local_listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      LISTENER_1551_1554
ARROW:(SYS@leo):PRIMARY>
Service is now ready.
$ lsnrctl status listener1

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 17-AUG-2016 21:51:26

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1551)))
STATUS of the LISTENER
------------------------
Alias                     listener1
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                17-AUG-2016 21:34:08
Uptime                    0 days 0 hr. 17 min. 17 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0.4/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/arrow/listener1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1551)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.11)(PORT=1551)))
Services Summary...
Service "leo" has 1 instance(s).
  Instance "leo", status UNKNOWN, has 1 handler(s) for this service...
Service "leo_a" has 1 instance(s).
  Instance "leo01", status READY, has 1 handler(s) for this service...
The command completed successfully

$ lsnrctl status listener2

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 17-AUG-2016 21:51:29

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1552)))
STATUS of the LISTENER
------------------------
Alias                     listener2
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                17-AUG-2016 21:34:10
Uptime                    0 days 0 hr. 17 min. 19 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0.4/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/arrow/listener2/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1552)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.11)(PORT=1552)))
Services Summary...
Service "leo" has 1 instance(s).
  Instance "leo", status UNKNOWN, has 1 handler(s) for this service...
Service "leo_a" has 1 instance(s).
  Instance "leo01", status READY, has 1 handler(s) for this service...
The command completed successfully

$ lsnrctl status listener3

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 17-AUG-2016 21:51:31

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1553)))
STATUS of the LISTENER
------------------------
Alias                     listener3
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                17-AUG-2016 21:34:13
Uptime                    0 days 0 hr. 17 min. 17 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0.4/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/arrow/listener3/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1553)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.11)(PORT=1553)))
Services Summary...
Service "leo" has 1 instance(s).
  Instance "leo", status UNKNOWN, has 1 handler(s) for this service...
Service "leo_a" has 1 instance(s).
  Instance "leo01", status READY, has 1 handler(s) for this service...
The command completed successfully

$ lsnrctl status listener4

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 17-AUG-2016 21:51:33

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1554)))
STATUS of the LISTENER
------------------------
Alias                     listener4
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                17-AUG-2016 21:34:15
Uptime                    0 days 0 hr. 17 min. 17 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0.4/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/arrow/listener4/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1554)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.11)(PORT=1554)))
Services Summary...
Service "leo" has 1 instance(s).
  Instance "leo", status UNKNOWN, has 1 handler(s) for this service...
Service "leo_a" has 1 instance(s).
  Instance "leo01", status READY, has 1 handler(s) for this service...
The command completed successfully

Cool Essbase MDX Stuff – Dimension Properties Edition

Tim Tow - Wed, 2016-08-17 22:12
I have always liked MDX as it exposes very powerful operations for creating member sets.  It has this amazing ability to take a set of members, union it with another set of members, intersect it with yet another set of members, and then exclude members from yet another set.  This is very powerful.

Recently, I was talking with a customer about some creative uses for MDX and they told me about some cool things they had done with MDX.  Kudos to George Cooper and Esam Jaber at Gap for showing me some creative MDX.  Though this is a really cool and new-to-me technique, after doing some online searches I found that the technique isn’t new to everyone.  Both Gary Crisci and Harry Gates have blogged on it already at http://garycris.blogspot.com/2014/03/mdx-queries-to-get-parentchild-and.html and http://www.cubesavvy.com/cubesavvy-utilities-updated-mdx-capabilities, respectively.

This technique, which gives developers access to outline information that was not previously available, also seems to be lightning fast.  I decided I should write about it as well and, of course, put my own spin on it by showing how it can be used in the Dodeca Spreadsheet Management System.

So, what is it?  The feature is called dimension properties, and it can utilize the PROPERTY_EXPR function.  This function provides the ability to query for, and return, members related to a given member.  The ability to return related members on the same row is unique to MDX among the Essbase query languages.  The Grid API, the query-by-example interface used to retrieve data from Essbase in the classic add-in, Smart View, and Dodeca, cannot do it out of the box.  The Essbase Report Script language, which is frequently used for exporting data, cannot do it either.  Regardless, we have customers who have asked how to have the parent member and/or the grandparent member on the same row as a member.  In the remainder of this blog post, I will explain dimension properties and, more specifically, the PROPERTY_EXPR function.

For this post, I will use the Geography dimension of the ASOSamp.Sample database.  The Geography dimension features members from different regions of the United States including details of the Region, the State, the City, and the Postal Code.  Here is a screenshot of some members in the Geography dimension:

Geography dimension members
With this dimension, let’s suppose you would like to get outline information at the city level, or level 1 in Essbase-speak.  In addition, let’s say you would like to have the state and the region as well.   Finally, while we are at it, maybe you also want to know the generation number of the member.  With dimension properties and the PROPERTY_EXPR function, you can get all of that information in one trip to the server.  Here is the query:

Select {} on COLUMNS,
Hierarchize(Descendants(Geography, Geography.Levels(1), SELF_AND_BEFORE), POST)
DIMENSION PROPERTIES
 GEN_NUMBER,
  PROPERTY_EXPR(Geography, MEMBER_NAME, Ancestor(CurrentAxisMember(), Geography.Generations(2)),"Gen2"),
  PROPERTY_EXPR(Geography, MEMBER_NAME, Ancestor(CurrentAxisMember(), Geography.Generations(3)),"Gen3")
 ON ROWS
FROM ASOSamp.Sample

Let’s break down the query to examine the components.  First, the Column axis specification is simple:

Select {} on COLUMNS,

The Column axis specification contains an empty set, but why do you need to specify a set at all?  The Essbase MDX specification states that a query cannot skip axes; the axes must be used in order.  It feels much more natural to me to get data back on the second axis, the Row axis, and because the Column axis is the first axis, a query cannot skip it and specify only a Row axis.  As we really aren't looking for any data to be returned by this query, we can just use an empty set.

Next, let’s look at the Row axis specification:

Hierarchize(Descendants(Geography, Geography.Levels(1), SELF_AND_BEFORE), POST)

Let’s split this into sections starting from the inside out.  The Descendants function returns, naturally, all of the descendants of a given member down to, and including, level 1 members in the outline.  For this database, this query will return the Geography dimension down to the City level, but will not return the bottom, or zip code, level.

The member set returned by the Descendants function is then sorted by the Hierarchize function.

Hierarchize(Descendants(Geography, Geography.Levels(1), SELF_AND_BEFORE), POST)

The POST argument specifies that child members are sorted before their parents, as they are in the spreadsheet add-ins.

Next, the DIMENSION PROPERTIES modifier for the Row axis specification provides the ability to return additional outline information related to each member returned in the set.  Here, the DIMENSION PROPERTIES specify that the generation number and the related generation 2 and generation 3 members are returned for each member in the set.

DIMENSION PROPERTIES
 GEN_NUMBER,
  PROPERTY_EXPR(Geography, MEMBER_NAME, Ancestor(CurrentAxisMember(), Geography.Generations(2)),"Gen2"),
  PROPERTY_EXPR(Geography, MEMBER_NAME, Ancestor(CurrentAxisMember(), Geography.Generations(3)),"Gen3")

The GEN_NUMBER argument is self-explanatory, so let’s look at the PROPERTY_EXPR function to see how it works.  Consider this portion of the statement:

PROPERTY_EXPR(Geography, MEMBER_NAME, Ancestor(CurrentAxisMember(), Geography.Generations(2)),"Gen2")

The first argument is the dimension name, so that is easy enough.

The second argument, called the property_name argument, has a number of valid values documented by Oracle: MEMBER_NAME, MEMBER_ALIAS, LEVEL_NUMBER, GEN_NUMBER, IS_EXPENSE, COMMENTS, RELATIONAL_DESCENDANTS, MEMBER_UNIQUE_NAME, an attribute dimension name, an alias-table name, or a UDA.  At the time of this writing, the current docs are located at http://docs.oracle.com/cd/E26232_01/doc.11122/esb_tech_ref/frameset.htm?mdx_property_expressions.html.

The third argument is called member_value_expression and this is where the magic happens.  In this argument, you can use a number of functions that return exactly one member related to the current member.  In my example above, the Ancestor function is used to return, for each member in the set as identified by the CurrentAxisMember function, the ancestor of that member at a given generation.  Other functions allow you to get the parent, first child, next sibling, or previous sibling of the current member.

The fourth and final argument is used to give a title to the column containing the extended information.
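To make that concrete, here is a variation of my own (a sketch, not from the original query): replacing the Ancestor call with the Parent function should return each member's immediate parent under a column titled "Parent":

  PROPERTY_EXPR(Geography, MEMBER_NAME, Parent(CurrentAxisMember()),"Parent")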

So, now that we know a bit about this syntax, what does it look like in EAS?  

MDX dimension properties in EAS

In their blogs, both Gary and Harry talk about ways to actually consume this information.  Harry wrote a special interface to display dimension properties returned by MDX.  Gary talked about the idea of using an Excel macro to parse the output.  I had my own ideas on how to use dimension properties in Dodeca.  I set up a simple Dodeca report and used only 3 lines of our automation language, workbook scripting, to build the view.  Here is a screenshot of my simple Dodeca view.

Dynamic MDX View in Dodeca

While running this view in Dodeca, the user can filter on the Geography, Product, and Stores dimensions, and the report is highly dynamic.  The automation in Dodeca performs these tasks:
  1. Returns members and properties as the descendants of a user-selected Geography member
  2. Retrieves and places the dimension properties and the members on the worksheet
  3. Places the selected Product and Stores dimension members in the proper location
  4. Retrieves Essbase data into the worksheet
  5. Creates Excel grouping based on the generation number
I will leave the step-by-step of creating this view in Dodeca to another blog post.

So, how could you use dimension properties?


Categories: BI & Warehousing

Approvals 1.5 for E-Business Suite Now Available

Steven Chan - Wed, 2016-08-17 16:52
Approvals for EBS mobile screenshot

A new version of the E-Business Suite Approvals mobile app is now available for iOS and Android.

Approvals for EBS 1.5 is part of a coordinated release of 18 E-Business Suite mobile apps.  These apps are designed to work with EBS 12.1.3 and 12.2 and are available for iOS and Android. 

This is our sixth set of updates to this family of EBS smartphone apps (a.k.a. "Release 6"). For details about the entire set of EBS mobile apps, see:

What's New?

Updates to our Approvals for EBS 1.5 app include:

  • New approval types for:
    • Human Resources (person status change)
    • Projects (projects, budgets)
    • Service Contracts
  • Ability to customize seeded metadata
  • Enhancement to Quoting approvals to support customization
  • Mobile Foundation
    • Ability to open links to external websites within the app on Android
    • Updated look and feel, including native look and feel for action sheets on iOS
    • Ability to easily clear user credentials on the Sign-In page
    • Use of Oracle Mobile Application Framework (MAF) 2.3.1

Some of these new features require EBS server-side patches.  See the documentation above for details.

Related Articles

Categories: APPS Blogs

Conditional Where clause with decode

Tom Kyte - Wed, 2016-08-17 13:26
Hello there, Good Day! I have a query at hand: SELECT mf_trn_id FROM mf_transactions WHERE MF_TRN_AGENT_CD = :b1 AND MF_TRN_PAN_NO = :b2 AND MF_TRN_SCH_CD = :b3 AND MF_TRN_COMP_CD = :b4 AND MF_TRN_CD = :b5 AND M...
Categories: DBA Blogs

Join a large table ( 70 Million Records) with two other tables

Tom Kyte - Wed, 2016-08-17 13:26
Hi, I have a requirement where I have to join three tables, say Employee (~70 Million records) with department (~2000 records) and Cities (~2000 records). The query looks something like select /*20 fields*/ from employee e,department d,ci...
Categories: DBA Blogs

Host I/O Queue Depth with XtremIO and SLOB Session Count. A Granular Look. Part II.

Kevin Closson - Wed, 2016-08-17 12:52

In my recent post about aggregate host I/O queue depth, I shared both 100% SQL SELECT and 20% SQL UPDATE test results (SLOB) at varying LUN (ASM disk) counts. The LUNs mapped to XtremIO volumes, but the assertions in that post are really applicable in most All-Flash Array situations.

I received quite a bit of email from readers about the granularity of session counts shown in the charts in that post. Overwhelmingly, folks asked to see more granular data. It so happens that the charts in that post were a mere snippet of the test suite results, so I charted the full data set and am posting the charts here.

Figures 1 and 2 below show 1, 2, 4, 8, and 16 host LUN results charted from 1 to 450 sessions for the read-only SLOB test and from 1 to 250 sessions for the 20% SQL UPDATE results. Remember that the sessions have zero think time in this testing; therefore, IOPS are a direct reflection of latency, and in this case latency is mostly attributable to host queueing, as I explained in the prior post.

The prime message in this data is the Total IOPS values demonstrated at even low host LUN counts and, as such, it makes little sense to create complex ASM disk groups (consisting of large numbers of host LUNs mapped to All-Flash Array storage like XtremIO). Unless, that is, you manage one of the very few production databases that demands IOPS above 100,000. I know these databases exist, but there aren’t as many of them as some might think. High IOPS-capable platforms like XtremIO are generally used for consolidation.

If you click on the image you can get the full-size chart.

Enjoy!

Figure 1 (0pct-450): read-only (100% SQL SELECT) results, 1 to 450 sessions

Figure 2 (20pct-250): 20% SQL UPDATE results, 1 to 250 sessions

Filed under: oracle

uploading custom boot image for compute cloud

Pat Shuff - Wed, 2016-08-17 12:45
One of the things that we talked about yesterday was uploading an image for a bundled solution into the Oracle Cloud Marketplace. Partners can register to bundle and sell solutions on the Oracle Cloud by providing a bootable image and having that image run in the Oracle Compute Cloud. Several examples of this are listed in the Marketplace. All of these solutions started with a base operating system and layered other products onto the operating system. They then took the resulting virtual machine configuration, bundled it into a tar.gz file, and uploaded it to the Marketplace. Once this is done, it can be offered as a product through the search engine and tracked, marketed, and sold by the partner. The resulting boot image is loaded into your private images and allows you to boot these instances as a compute service where you select the processor, memory, disk, and network configuration. The beauty of this interface is that it is the same one you would use to create a custom boot image of your own. This enables features like snapshots of instances, so that you can clone instances, replicate instances for scaling, and back up instances to a common storage location for fast disaster recovery.

Before we dive into how to create a bootable image for the Oracle Compute Cloud, let's review some basics so that we have the same terminology and a common language to discuss how to take an instance from your data center and run it in the Oracle Compute Cloud. We will also look at the steps needed for AWS and Azure and what it takes to upload an image to their compute clouds. Some of the different ways that you can run an application in your data center are

  • Bare Metal - load an operating system on a computer and load applications on the operating system. Backup the instance and restore into cloud
  • Oracle VirtualBox - create a virtual instance on your desktop/laptop/server and convert the resulting OVA/VDI into an uploadable format
  • VMWare ESX - create a virtual instance on your VMWare cluster and convert the resulting VMDK images into an uploadable format
  • Citrix or public domain Xen Server - create a virtual instance on your Xen server and convert the VMDK images into an uploadable format
  • Microsoft HyperV - create a virtual instance on your HyperV server and convert the VHD image into an uploadable format
The key difference between all of these approaches is the conversion into an uploadable format for the target cloud provider. The Oracle Compute Cloud currently requires that you create a tar.gz binary from a supported operating system and load it into your private image space. Documentation on creating your own image is available from Oracle; it goes into detail on which operating systems are currently supported and references a Tutorial on building a LAMP Stack instance. In this example, the tutorial uses VirtualBox to boot from an iso that is downloaded from edelivery.oracle.com, which is the traditional way of building an image on bare metal or a virtual server.

We will not go through the installation process with images as we usually do because the tutorial does a very good job of showing you how to do this. The process takes a while (budget a half day) to set up VirtualBox, download the iso image, load and boot the Linux instance, patch the instance, and install the Apache Web Server, MySQL, and PHP. The most difficult part of this is configuration of the network. The tutorial suggests that you turn off selinux and iptables. We suggest that you go to the extra effort to enable these services and open up the ports needed to communicate with your services. Leaving these services off inside any cloud provider's internal network and relying upon the external firewall and routing rules is a slippery slope to insecurity. We suggest multiple layers of security: at the cloud admin layer, at the network configuration layer with whitelist routing, and at the operating system layer. You might need to first allow connections to port 80 from anywhere at the operating system layer and tighten up these rules once you know where your clients are coming from, but turning off security in the operating system is not recommended.

Microsoft provides a writeup on how to import a Linux image which could be configured with the LAMP stack. It is important to note that the article assumes that you are starting with a VHD file or have converted your existing file format into VHD for upload. The writeup also assumes that you have the Azure Command Line or PowerShell configured and installed which again assumes that you are starting with a Windows desktop to do the heavy lifting. There are few blogs that detail how to do this. Most recommend using Bitnami to load images or load a Linux image and rebuild the instance in Azure rather than building one of your own and uploading it. Most of the blog entries talk about having the para-virtual drivers installed to enable HyperV to properly work once the image is running.

Amazon provides good documentation and tutorials on importing images from VMWare, Windows, and other formats. Their tools allow for either command line or web based imports and convert the images to the Amazon AMI format. It is important to note that once you have imported the images you do need to reconfigure networking and security, as must be done with almost all of the other cloud vendor solutions. The only true exception to this is Ravello Systems, which is available on Amazon, Google, Azure, and the Oracle Cloud. Ravello allows you to import your VMWare images unchanged and run them in all four cloud providers. Note that this is different from converting an image to a runnable image in the cloud provider format. Ravello uses a hypervisor emulator that uploads a shim to translate VMWare calls into cloud provider calls and interfaces.

In summary, all of the cloud providers allow you to import existing in-house images into their cloud. All use a format translator to convert the original format into the cloud provider format. The single exception is the Ravello import utility, which takes the VMWare format, imports it unchanged, and runs it in the cloud provider of your choice. The key differences between the import mechanisms are the tools that are needed. Do you need to start with a Windows desktop and use PowerShell to import the image? Do you need to install a command line utility that runs on most operating systems and converts your image to a custom image? Can you upload and download the image from the cloud provider to maintain version control in your data center? Does the cloud provider have the ability to run these images in your data center as if they were cloud services, on a machine under your security umbrella? The deeper we dive into these topics, the more questions we seem to raise rather than answer. Hopefully today's blog gives you more insight into running existing applications on different cloud vendor platforms: what control you give up, what options you have, and what it takes to import virtual images from your data center into a cloud solution.

Generate Azure VM with Resource Manager deployment in PowerShell

Yann Neuhaus - Wed, 2016-08-17 09:04

Recently, a new way to manage the Azure infrastructure appeared: Resource Manager. It brings many advantages compared to the classic deployment.
The differences between these two deployment models will not be covered in this blog because that is not its goal, and there is already a very good Microsoft article on this subject.

In this blog, we will generate a new Windows Azure Virtual Machine using Resource Manager deployment with PowerShell from On-Premise.

Remember, only RM objects can be listed with RM cmdlets! Conversely, only Classic objects can be listed with Classic cmdlets!
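For example (a quick illustration, assuming both the classic Azure module and the AzureRM module are installed):

# Lists only Resource Manager VMs
Get-AzureRmVM
# Lists only Classic (Service Management) VMs
Get-AzureVM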

We can connect automatically to Azure Account with this command:
Select-AzureRmProfile -Path "C:\temp\AzureCert.json"

But to generate this profile file in the first place, we need to connect manually to our Azure account at least once, as follows:
Add-AzureRmAccount -SubscriptionId "<YourSubscriptionID>"

Enter your personal credentials and then run the following command:
Save-AzureRmProfile -Path "C:\temp\AzureCert.json"

If you want to navigate through your different attached Azure Subscriptions, use the cmdlets Get-AzureRmSubscription/Set-AzureRmSubscription.
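For example (a small sketch; depending on your AzureRM module version, the selection cmdlet may be Select-AzureRmSubscription, an alias of Set-AzureRmContext):

# List every subscription attached to the logged-in account
Get-AzureRmSubscription | Select SubscriptionName, SubscriptionId
# Switch the current context to a specific subscription
Select-AzureRmSubscription -SubscriptionId "<YourSubscriptionID>"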

To obtain the different existing Azure Locations:
Get-AzureRmLocation | Select DisplayName

For the end of this blog, we will work in this specific Azure Location:
$location = "West Europe"

Hardware Profile

To list all different available Resource Group:
Get-AzureRmResourceGroup | Select ResourceGroupName, Location

And select one of them, here simply the first Resource Group returned:
$resourceGroupName = (Get-AzureRmResourceGroup).ResourceGroupName[0]
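Alternatively, if you prefer a dedicated Resource Group for this VM, you can create one (a sketch; the group name is just an example):

$resourceGroupName = "RG-AzureVMDemo"
New-AzureRmResourceGroup -Name $resourceGroupName -Location $location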

To choose the correct VM size, list all available Azure formats:
Get-AzureRmVMSize -location $location | Select Name, NumberOfCores, MemoryInMB
$vmSize = "Standard_A3"

And initialize the VM object to build:
$vmName = "Artanis"
$vm = New-AzureRmVMConfig -Name $vmName -VMSize $vmSize

Image Profile

Now we want to select a specific image available from a publisher in Azure. In this case, we will choose the latest SQL Server 2016 Enterprise edition image.
The following steps describe how to find all the elements needed to select the correct available image.

Select all publishers from a specific Azure Location:
Get-AzureRmVMImagePublisher -Location $location | Select PublisherName
$publisher = "MicrosoftSQLServer"

Now select all offers from a specific Azure Publisher:
Get-AzureRmVMImageOffer -Location $location -PublisherName $publisher | Select Offer
$offer = "SQL2016-WS2012R2"

Then select all Skus from a specific Azure Offer:
Get-AzureRmVMImageSku -Location $location -PublisherName $publisher -Offer $offer | Select Skus
$skus = "Enterprise"

Finally choose your version:
(Get-AzureRmVMImage -Location $location -PublisherName $publisher -Offer $offer -Skus $skus).Version

To obtain the last version of the image:
$version = (Get-AzureRmVMImage -Location $location -PublisherName $publisher -Offer $offer -Skus $skus | Sort-Object -Property Version -Descending).Version[0]

Add the image profile to the existing VM object:
$vm = Set-AzureRmVMSourceImage -VM $vm -PublisherName $publisher -Offer $offer -Skus $skus -Version $version

OS Profile

According to the Image Profile, the Virtual Machine will be a Windows Server. So enter the specifications as follows:
$username = "dbi"
$password = ConvertTo-SecureString 'B3stPa$$w0rd3v3r' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($username, $password)
$vm = Set-AzureRmVMOperatingSystem -VM $VM -ComputerName "Artanis" -Windows -Credential $cred -ProvisionVMAgent

Disk Profile

As the VM will be created from an Azure Image, we need to specify a location and a name for the OS disk.

To list all your available Azure Storage Accounts, run this command:
Get-AzureRmStorageAccount | Select StorageAccountName, Location

To list the different containers available in your Azure Storage:
(Get-AzureRmStorageAccount | Get-AzureStorageContainer).CloudBlobContainer

And now add a disk profile to the existing VM:
$diskLocation = "https://<accountStorageName>.blob.core.windows.net/vhds/"
$vm = Set-AzureRmVMOSDisk -VM $vm -Name "artanisVHDOS.vhd" -VhdUri ($diskLocation+"artanisVHDOS.vhd") -CreateOption FromImage

IP Profile

Here is an example of Network configuration:
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "CloudSubnet" -AddressPrefix "10.0.64.0/24"
$ain = New-AzureRmVirtualNetwork -Name "VirtualNetwork" -ResourceGroupName $resourceGroupName -Location $location -AddressPrefix "10.0.0.0/16" -Subnet $subnet
$pip = New-AzureRmPublicIpAddress -Name "AzurePublicIP" -ResourceGroupName $resourceGroupName -AllocationMethod Dynamic -Location $location
$nic = New-AzureRmNetworkInterface -Name "AzureNetInterface" -ResourceGroupName $resourceGroupName -Location $location -SubnetId $ain.Subnets[0].Id -PublicIpAddressId $pip.Id
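One step that is easy to miss: the network interface must be attached to the VM configuration before the VM is created, otherwise New-AzureRmVM has no NIC to work with:

# Attach the network interface to the VM configuration
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id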

Conclusion: VM generation

Now we have entered all different profiles required to generate a new Windows Azure VM:
$azurevm = New-AzureRmVM -ResourceGroupName $resourceGroupName -Location $location -VM $vm

Use the Get-AzureRmVM cmdlet to list all available VMs.

To download the remote desktop file to connect to this new virtual machine, use the following command:
Get-AzureRmRemoteDesktopFile -ResourceGroupName $resourceGroupName -Name $vmName -LocalPath "C:\Temp\Artanis.rdp"
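Once downloaded, the .rdp file can be opened directly from a Windows client, for example:

mstsc "C:\Temp\Artanis.rdp"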

With all these commands, you can see how simple it is to automate the generation of a new Virtual Machine in Azure. Moreover, you have probably noticed that the construction of the VM object (with its different profiles) is similar to the Hyper-V structure.

I hope it helps you ;-)

 

The article Generate Azure VM with Resource Manager deployment in PowerShell appeared first on Blog dbi services.
