Feed aggregator

Table Statistics Get Null or Empty

Tom Kyte - Fri, 2018-01-19 01:26
What could be the possible reason(s) why the table statistics (DBA_TAB_STATISTICS) became null or empty, i.e. the num_rows and last_analyzed columns? At a certain date the DBA_TAB_STATISTICS num_rows and last_analyzed columns have values (not empty/null), and...
Categories: DBA Blogs

Google friendly application URLs in APEX

Tom Kyte - Fri, 2018-01-19 01:26
Hi. Nowadays there has been a lot of discussion about having a Google-friendly URL for a web application. I have gone through many links, but am still confused. My question is - if I create a public application with Oracle APEX, would Google be able to ...
Categories: DBA Blogs

Analytical Function to compute running daily overtime by week

Tom Kyte - Fri, 2018-01-19 01:26
Hi there, I may be overthinking this query and would really appreciate some input/kick in the pants. How in the heck do I get daily overtime based on a 40 hour work week? If the week isn't complete but an employee has racked up more than 40 hours...
Categories: DBA Blogs

Reading image file

Tom Kyte - Fri, 2018-01-19 01:26
Hello Mr. Tom, please, I need an answer: reading an image file into a custom table and exporting the image into a new file. Let's say we have a file called 'horse.jpg' and upload it to a table. After that you want to export it to a new file horse1.jpg...
Categories: DBA Blogs

Should I use SQL or Python?

Bobby Durrett's DBA Blog - Thu, 2018-01-18 13:57

We had an outage on an important application last Thursday. A particular SQL statement locked up our database with library cache: mutex X waits. I worked with Oracle support to find a bug that caused the issue and we came up with a good workaround. The bug caused a bunch of shared cursor entries. So, I wanted to run a test on a test database to recreate the excess shared cursor entries. I wanted to run the SQL query that caused the outage a bunch of times. Also, we embed the SQL query inside a PL/SQL procedure so I wanted to run the query by calling the procedure. So, I needed to come up with a bunch of calls to the procedure using realistic data as a test script. This blog post is about the decision I had to make about creating the test script. Would I use SQL or Python to quickly hack together my test script? I thought it would be interesting to write about my choice because I am working on my Python for the Oracle DBA talk that encourages Oracle DBAs to learn Python. In this situation I turned to SQL instead of Python so what does that say about the value of Python for Oracle DBAs?

Let me lay out the problem that I needed to solve. Note that I was trying to get this done quickly and not spend a lot of time coming up with the perfect way to do it. I had over 6000 sets of bind variable values that the problem query has used in the past. I used my bind2.sql script to get some sample bind variable values for the problem query. The output of bind2.sql was in this format:

2017-11-27 15:08:56 :B1 1
2017-11-27 15:08:56 :B2 ABC
2017-11-27 15:08:56 :B3 JAFSDFSF
2017-11-27 15:08:56 :B4 345
2017-11-27 15:08:56 :B5 6345
2017-11-27 15:08:56 :B6 10456775
2017-11-27 15:08:56 :B7 34563465
2017-11-27 15:08:56 :B8 433
2017-11-27 15:09:58 :B1 1
2017-11-27 15:09:58 :B2 JUL
2017-11-27 15:09:58 :B3 KSFJSDJF
2017-11-27 15:09:58 :B4 234
2017-11-27 15:09:58 :B5 234253
2017-11-27 15:09:58 :B6 245
2017-11-27 15:09:58 :B7 66546
2017-11-27 15:09:58 :B8 657576
2017-11-27 15:10:12 :B1 1
2017-11-27 15:10:12 :B2 NULL
2017-11-27 15:10:12 :B3 NULL
2017-11-27 15:10:12 :B4 45646
2017-11-27 15:10:12 :B5 43
2017-11-27 15:10:12 :B6 3477
2017-11-27 15:10:12 :B7 6446
2017-11-27 15:10:12 :B8 474747

I needed to convert it to look like this:

exec myproc(34563465,10456775,345,433,6345,'JAFSDFSF','ABC',1,rc);
exec myproc(66546,245,234,657576,234253,'KSFJSDJF','JUL',1,rc);
exec myproc(6446,3477,45646,474747,43,'NULL','NULL',1,rc);

I gave myself maybe a minute or two to decide between SQL and Python. I chose SQL. All I did was insert the data into a table and then manipulate it using SQL statements. Note that the order of the arguments in the procedure call is not the same as the order of the bind variable numbers. Also, some are character and some are number types.

Here is the SQL that I used:

drop table bindvars;

create table bindvars
(datetime varchar2(20),
 varname varchar2(2),
 varvalue varchar2(40));

insert into bindvars values ('2017-11-27 15:08:56','B1','1');
insert into bindvars values ('2017-11-27 15:08:56','B2','ABC');
insert into bindvars values ('2017-11-27 15:08:56','B3','JAFSDFSF');
insert into bindvars values ('2017-11-27 15:08:56','B4','345');
insert into bindvars values ('2017-11-27 15:08:56','B5','6345');
insert into bindvars values ('2017-11-27 15:08:56','B6','10456775');
insert into bindvars values ('2017-11-27 15:08:56','B7','34563465');
insert into bindvars values ('2017-11-27 15:08:56','B8','433');
insert into bindvars values ('2017-11-27 15:09:58','B1','1');
insert into bindvars values ('2017-11-27 15:09:58','B2','JUL');
insert into bindvars values ('2017-11-27 15:09:58','B3','KSFJSDJF');
insert into bindvars values ('2017-11-27 15:09:58','B4','234');
insert into bindvars values ('2017-11-27 15:09:58','B5','234253');
insert into bindvars values ('2017-11-27 15:09:58','B6','245');
insert into bindvars values ('2017-11-27 15:09:58','B7','66546');
insert into bindvars values ('2017-11-27 15:09:58','B8','657576');
insert into bindvars values ('2017-11-27 15:10:12','B1','1');
insert into bindvars values ('2017-11-27 15:10:12','B2','NULL');
insert into bindvars values ('2017-11-27 15:10:12','B3','NULL');
insert into bindvars values ('2017-11-27 15:10:12','B4','45646');
insert into bindvars values ('2017-11-27 15:10:12','B5','43');
insert into bindvars values ('2017-11-27 15:10:12','B6','3477');
insert into bindvars values ('2017-11-27 15:10:12','B7','6446');
insert into bindvars values ('2017-11-27 15:10:12','B8','474747');

commit;

drop table bindvars2;

create table bindvars2 as
select 
b1.varvalue b1,
b2.varvalue b2,
b3.varvalue b3,
b4.varvalue b4,
b5.varvalue b5,
b6.varvalue b6,
b7.varvalue b7,
b8.varvalue b8
from 
bindvars b1,
bindvars b2,
bindvars b3,
bindvars b4,
bindvars b5,
bindvars b6,
bindvars b7,
bindvars b8
where
b1.datetime = b2.datetime and
b1.datetime = b3.datetime and
b1.datetime = b4.datetime and
b1.datetime = b5.datetime and
b1.datetime = b6.datetime and
b1.datetime = b7.datetime and
b1.datetime = b8.datetime and
b1.varname = 'B1' and
b2.varname = 'B2' and
b3.varname = 'B3' and
b4.varname = 'B4' and
b5.varname = 'B5' and
b6.varname = 'B6' and
b7.varname = 'B7' and
b8.varname = 'B8';

select 'exec myproc('||
B7||','||
B6||','||
B4||','||
B8||','||
B5||','''||
B3||''','''||
B2||''','||
B1||',rc);'
from bindvars2;

I hacked the insert statements together with my TextPad text editor and then loaded the rows into a little table. Then I built a new table which combines the 8 rows for each call into a single row with a column for each bind variable. Finally, I queried the second table, generating the procedure calls with single quotes, commas and other characters all in the right place.

Now that the rush is past and my testing is done, I thought I would hack together a quick Python script to do the same thing. If I had chosen Python, how would I have done it without spending a lot of time making it optimal? Here is what I came up with:

Instead of insert statements, I pulled the data into a multi-line string constant. Then I split it into a list of strings, with each string representing a line. Then I split each line into space-delimited strings so each line would have date, time, bind variable name, and bind variable value. Finally, I looped through each set of 8 lines, extracting the bind variable values and then printing them in the correct order and format.
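
Something along those lines looks like this. It is just a minimal sketch of the approach described above, not the exact script: it only includes the first two sample captures from the bind2.sql output shown earlier, and the procedure name and argument order are taken from the exec lines above.

# Quick hack: group the bind2.sql output into sets of 8 lines and
# print one "exec myproc(...)" call per set.

bind_data = """\
2017-11-27 15:08:56 :B1 1
2017-11-27 15:08:56 :B2 ABC
2017-11-27 15:08:56 :B3 JAFSDFSF
2017-11-27 15:08:56 :B4 345
2017-11-27 15:08:56 :B5 6345
2017-11-27 15:08:56 :B6 10456775
2017-11-27 15:08:56 :B7 34563465
2017-11-27 15:08:56 :B8 433
2017-11-27 15:09:58 :B1 1
2017-11-27 15:09:58 :B2 JUL
2017-11-27 15:09:58 :B3 KSFJSDJF
2017-11-27 15:09:58 :B4 234
2017-11-27 15:09:58 :B5 234253
2017-11-27 15:09:58 :B6 245
2017-11-27 15:09:58 :B7 66546
2017-11-27 15:09:58 :B8 657576
"""

lines = bind_data.splitlines()

# Each capture is 8 consecutive lines: date, time, :Bn, value
for start in range(0, len(lines), 8):
    group = lines[start:start + 8]
    if len(group) < 8:
        break
    binds = {}
    for line in group:
        _, _, name, value = line.split()   # date, time, :Bn, value
        binds[name.lstrip(':')] = value
    # Same argument order and quoting as the SQL version above
    print("exec myproc({B7},{B6},{B4},{B8},{B5},"
          "'{B3}','{B2}',{B1},rc);".format(**binds))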

These are two quickly hacked-together solutions. I think the key point is how I stored the data. With SQL I used tables. With Python I used lists. I'm not sure which I like better in this case. I've been doing SQL longer, but Python wasn't really harder. I guess my decision under pressure to use SQL shows that I still have more comfort with the SQL way of doing things, but my after-the-fact Python hacking shows that the Python solution was not any harder. FWIW.

Bobby

 

Categories: DBA Blogs

Alfresco DevCon 2018 – Day 2 – Big files, Solr Sharding and Minecraft, again!

Yann Neuhaus - Thu, 2018-01-18 13:46

Today is the second day of the Alfresco DevCon 2018 and therefore yes, it is already over, unfortunately. In this blog, I will be continuing my previous one with the sessions I attended on the afternoon of the day-1 as well as on day-2. There were too many interesting sessions and I don’t really have the time to talk about all of them… But if you are interested, all the sessions were recorded (as always), so wait a little bit and check out the DevCon website, the Alfresco Community or the Alfresco YouTube channel and I’m sure you will find all the recordings as soon as they are available.

 

So on the afternoon of the day-1, I started with a presentation by Jeff Potts, you all know him, and he was talking about how to move some gigantic files (several gigabytes) in (upload) and out (download) of Alfresco. He basically presented a use case where the users had to manage big files and put them all in Alfresco with as little headache as possible. On paper, Alfresco can handle any file no matter the size, because the only limit is what the File System of the Alfresco Server supports. However, when you start working with 10 or 20 GB files, you can sometimes have issues like exceptions, timeouts, network outages, aso… It might not be frequent but it can happen for a variety of reasons (not always linked to Alfresco). The use case here was to simplify the import into Alfresco and make it faster. Jeff tested a lot of possible solutions like using the Desktop Sync, CMIS, FTP, the resumable upload Share add-on, aso…

In the end, a plain single-stream upload/download will always be limited by the network. So he tried to work on improving this part and used the Resilio Sync software (formerly BitTorrent Sync). This tool can be used to stream a file to the Alfresco Server, BitTorrent style (P2P). But the main problem with this solution is that P2P is only as good as the number of users having this specific file available on their workstations… Depending on the use case, it might increase the performance but it wasn’t ideal.

In the end, Jeff came across the “GridFTP” protocol. This is an extension of FTP for grid computing whose purpose is to make file transfers more reliable and faster by using multiple simultaneous TCP streams. There are several implementations of GridFTP, like the Globus Toolkit. Basically, the solution in this case was to use Globus to transfer the big files from the user’s workstation to a dedicated File System which is mounted on the Alfresco Server. Then, using the Alfresco Bulk FileSystem Import Tool (BFSIT), it is really fast to import documents into Alfresco once they are on the File System of the Alfresco Server. For the download, it is just the opposite (using the BFSET)…

For files smaller than 512 MB, this solution is probably slower than the default Alfresco upload/download actions, but for bigger files (or groups of files) it becomes very interesting. Jeff did some tests and basically, for one or several files with a total size of 3 or 4 GB, the transfer using Globus followed by the import into Alfresco was 50 to 60% faster than the Alfresco default upload/download.

 

Later, Jose Portillo shared Solr Sharding Best Practices. Sharding is the action of splitting your indexes into Shards (parts of an index) to speed up searching and indexing (horizontal scaling). The Shards can be stored on a single Solr Server or they can be spread across several. Doing this basically increases the search speed because the search is executed on all Shards. For the indexing of a single node there is no big difference, but for a full reindex it does increase the performance a lot, because nodes are indexed on each Shard at the same time…

A single Shard can work well (according to the Alfresco Benchmark) with up to 50M documents. Therefore, using Shards is mainly for big repositories, but that doesn’t mean there are no use cases where it would be interesting for smaller repositories - there are! If you want to increase your search/index performance, start creating Shards much earlier.

For the Solr Sharding, there are two registration options:

  • Manual Sharding => You need to manually configure the IPs/hosts where the Shards are located in the Alfresco properties files
  • Dynamic Sharding => Easier to set up, and Alfresco automatically provides information regarding the Shards in the Admin interface for easy management

There are several Sharding methods, which are summarized here:

  • MOD_ACL_ID (ACL v1) => Sharding based on ACL. If all documents have the same ACL (same site for example), then they will all be on the same Shard, which might not be very useful…
  • ACL_ID (ACL v2) => Same as v1 except that it uses the murmur hash of the ACL ID and not its modulus
  • DB_ID (DB ID) => Default in Solr6. Nodes are evenly distributed on the Shards based on their DB ID
  • DB_ID_RANGE (DB ID Range) => You can define the DB ID range for which nodes will go to which Shard (E.g.: 1 to 10M => Shard-0 / 10M to 20M => Shard-1 / aso…)
  • DATE (Date and Time) => Assigns dates to each Shard based on the month. It is possible to group some months together and assign a group per Shard
  • PROPERTY (Metadata) => The value of some property is hashed and this hash is used for the assignment to a Shard, so all nodes with the same value end up in the same Shard
  • EXPLICIT (?) => This is an all-new method that isn’t in the documentation yet… Since there isn’t any information about it except in the source code, I asked Jose to explain what it does. He’ll look at the source code and I will update this blog post as soon as I hear back!

Unfortunately, Solr Sharding has only been available since Alfresco Content Services 5.1 (Solr 4), and then only using the ACL v1 method. New methods were then added with the Alfresco Search Services (Solr 6). The availability of the methods versus the Alfresco/Solr versions was summarized in Jose’s presentation:

DevCon2018_ShardingMethodsAvailability

Jose also shared a comparison matrix of the different methods to choose the right one for each use case:

DevCon2018_ShardingMethodsFeatures

Some other best practices regarding the Solr Sharding:

  • Replicate the Shards to improve response time; it also provides High Availability so… No reason not to!
  • Back up the Shards using the provided Web Service so Alfresco can do it for you, for one or several Shards
  • Use DB_ID_RANGE if you want to be able to add Shards without having to perform a full reindex; it is the only method that allows this
  • If you need a method other than DB_ID_RANGE, then plan the number of Shards to be created carefully. You might want to overshard to account for future growth
  • Keep in mind that each Shard will pull the changes from Alfresco every 15s and it all goes to the DB… That might create some load there, so be sure that your DB can handle it
  • As far as I know, at the moment the Sharding does not support Solr in SSL. Solr should be protected from external access anyway because it is only used internally by Alfresco, so this is an ugly point so far but it’s not too bad. Sharding is pretty new, so it will probably support SSL at some point in the future
  • Tune Solr properly and don’t forget the Application Server request header size
    • Solr4 => Tomcat => maxHttpHeaderSize=…
    • Solr6 => Jetty => solr.jetty.request.header.size=…

 

The day-2 started with a session from John Newton, who presented the impact of emerging technologies on content. As usual, John’s presentation had a funny theme incorporated into the slides, and this time it was Star Wars.

DevCon2018_StarWars

 

After that, I attended the Hack-a-thon showcase, presented/introduced by Axel Faust. In the Alfresco world, Hack-a-thons are:

  • Around since 2012
  • Open-minded and all about collaboration. Therefore, the output of any project is open source and available for the community. It’s not about money!
  • Always the source of great add-ons and ideas
  • 2 times per year
    • During conferences (day-0)
    • Virtual Hack-a-thon (36h ‘follow-the-sun’ principle)

A few of the 16 teams that participated in the Hack-a-thon presented the result of their Hack-a-thon day and there were really interesting results for ACS, ACS on AWS, APS, aso…

Besides that, I also attended all the lightning talks on this day-2, as well as presentations on PostgreSQL and Solr HA/Backup solutions and best practices. The presentations about PostgreSQL and Solr were interesting, especially for newcomers, because they really explained what should be done to have a highly available and resilient Alfresco environment.

 

There were too many lightning talks to mention them all but, as always, some were quite interesting and I just have to mention the talk about the ContentCraft plugin (from Roy Wetherall). There cannot be an Alfresco event (be it a Virtual Hack-a-thon, BeeCon or DevCon now) without an Alfresco integration into Minecraft. Every year, Roy keeps adding new stuff to his plugin… I remember years ago, Roy was already able to create a building in Minecraft where the height represented the number of folders stored in Alfresco and the depth was the number of documents inside, if my memory is correct (this has changed now, it represents the number of sub-folders). This year, Roy presented the new version and it’s even more incredible! Now if you are in front of one of the building’s doors, you can see the name and creator of the folder on a ‘Minecraft sign’. Then you can walk into the building and there is a corridor. On both sides, there are rooms which represent the sub-folders. Again, there are ‘Minecraft signs’ there with the names and creators of the sub-folders. Up to there, it’s just the same thing again, so that’s cool, but it gets even better!

If you walk into a room, you will see ‘Minecraft bookshelves’ and ‘Minecraft chests’. The bookshelves are just there for decoration, but if you open the chests, you will see, represented as ‘Minecraft books’, all your Alfresco documents stored in this sub-folder! Then if you open a book, you will see the content of that Alfresco document! And even crazier, if you update the content of the book in Minecraft and save it, the document stored in Alfresco will reflect this change! This is way too funny :D.

It’s all done using CMIS so there is nothing magical… Yet it really makes you wonder if there are any limits to what Alfresco can do ;).
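
Just to give an idea of the kind of CMIS calls involved, here is a minimal sketch using the Python cmislib client against a local Alfresco. This is not Roy’s plugin code; the URL, credentials and paths below are made up for the example.

import io
from cmislib import CmisClient

# Minimal sketch only: assumes a local Alfresco exposing the standard
# CMIS 1.1 AtomPub binding, default credentials and a made-up folder path.
client = CmisClient(
    'http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/atom',
    'admin', 'admin')
repo = client.defaultRepository

# List the children of a folder -- the equivalent of walking the "rooms"
folder = repo.getObjectByPath('/Sites/my-site/documentLibrary')
for child in folder.getChildren():
    print(child.getName())

# Read a document's content -- the text shown in the Minecraft "book"
doc = repo.getObjectByPath('/Sites/my-site/documentLibrary/notes.txt')
print(doc.getContentStream().read())

# Write new content back, which is what updates the document in Alfresco
doc.setContentStream(io.BytesIO(b'Edited from Minecraft'))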

 

If I dare to say: long live Alfresco! And see you around again for the next DevCon.

 

 

The article Alfresco DevCon 2018 – Day 2 – Big files, Solr Sharding and Minecraft, again! appeared first on Blog dbi services.

Critical Patch Update for January 2018 Now Available

Steven Chan - Thu, 2018-01-18 11:37

The Critical Patch Update (CPU) for January 2018 was released on January 16, 2018. Oracle strongly recommends applying the patches as soon as possible.

The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents. 

Supported products that are not listed in the "Supported Products and Components Affected" Section of the advisory do not require new patches to be applied.

The Critical Patch Update Advisory is available at the following location:

It is essential to review the Critical Patch Update supporting documentation referenced in the Advisory before applying patches.

The next four Critical Patch Update release dates are:

  • April 17, 2018
  • July 17, 2018
  • October 16, 2018
  • January 15, 2019
Categories: APPS Blogs

Upgrading GoldenGate Microservices Architecture – GUI Based

DBASolved - Thu, 2018-01-18 10:45

In August of 2017, Oracle released two architectures for Oracle GoldenGate: the Classic Architecture and the Microservices Architecture. Since then there has been some discussion around upgrading Oracle GoldenGate to Microservices. Due to the change in architecture, there is no direct upgrade path from the Classic Architecture to the Microservices Architecture. If you want to use the new Microservices Architecture, you will have to do a fresh install and migrate to the new architecture.

Let’s say that you are bold and forward-thinking and had already made the move to the Oracle GoldenGate Microservices Architecture upon the release of 12.3.0.1.0 in August 2017 … great, and I’m happy you did! It is a really cool architecture to be on and will provide you with a stepping stone into the cloud world. Now what do you do when Oracle releases a new version of the Microservices Architecture? … The answer is upgrade!

Now, upgrading the Oracle GoldenGate Microservices Architecture is not as hard as upgrades have been in the past with Oracle GoldenGate. It has actually gotten simpler to upgrade … all you have to do now is install a new set of binaries and switch the deployment home for your deployments and ServiceManager over to it.

To perform the upgrade simply follow these steps:

1. Log in and verify which Oracle GoldenGate Home the ServiceManager is using
a. Log in to the ServiceManager (http(s)://<hostname>:<port>)

b. Review the Deployment section (bottom of page)

2. Install the new Oracle GoldenGate for Microservices binaries next to the existing binaries (technically an out-of-place upgrade)

3. Update the ServiceManager and/or Deployments with new Oracle GoldenGate Home Information
a. Click the ServiceManager or Deployment (hyperlink)

b. Click the pencil icon. This will open the dialog box for editing

c. Update the GoldenGate Home with the new Oracle GoldenGate Home Path
d. Click Apply

e. From the Overview page, use the Action button to restart the ServiceManager out of the new Oracle GoldenGate home

After hitting “restart” from the Action button, you will lose access to the ServiceManager. This is due to the old ServiceManager being shut down and the new ServiceManager being started out of the new Oracle GoldenGate Home. Once the new ServiceManager is up and running, the same steps can be followed to move your deployment homes to the new Oracle GoldenGate Home.

Note: Ensure you stop all extract/replicat processes within the Deployment Home before moving the GoldenGate Home of a deployment.

Enjoy!!!

Categories: DBA Blogs

Column Stats

Jonathan Lewis - Thu, 2018-01-18 08:22

I’ve made several comments in the past about the need for being selective when gathering object statistics, with particular reference to the trade-offs when creating histograms. With Oracle 12c it’s now reasonably safe (as far as I’m concerned) to set a method_opt as a table preference that identifies columns where you expect to see Frequency or (pace the buggy behaviour described in a recent post) Top-N histograms. The biggest problem I have is that I keep forgetting the exact syntax I need – so I’ve written this note more as a reminder to myself than anything else.

Typically I might expect to use the standard 254 buckets when gathering histograms, with an occasional variation to increase the bucket count; but for the purposes of this note I’m going to demonstrate with a much lower value. So here’s a table creation statement (running 12.1.0.2 – so it will gather basic stats on the create) and two variations of a call to gather stats with a specific method_opt – followed by a question:

create table t1
as
select
        object_type o1,
        object_type o2,
        object_type o3,
        object_id,
        object_name
from
        all_objects
where
        rownum <= 50000 -- > comment to bypass wordpress format problem
;

select  column_name, num_distinct, histogram, num_buckets, to_char(last_analyzed,'hh24:mi:ss')
from    user_tab_cols where table_name = 'T1' order by column_id;

execute dbms_lock.sleep(2)

begin
        dbms_stats.gather_table_stats(
                user,
                't1',
                method_opt=>'for all columns size 1 for columns o1 o2 o3 size 15'
        );
end;
/

select  column_name, num_distinct, histogram, num_buckets, to_char(last_analyzed,'hh24:mi:ss')
from    user_tab_cols where table_name = 'T1' order by column_id;

execute dbms_lock.sleep(2)

begin
        dbms_stats.gather_table_stats(
                user,
                't1',
                method_opt=>'for all columns size 1 for columns size 15 o1 o2 o3'
        );
end;
/

select  column_name, num_distinct, histogram, num_buckets, to_char(last_analyzed,'hh24:mi:ss')
from    user_tab_cols where table_name = 'T1';


The big question is this: which columns will have histograms after each of the gather_table_stats() calls:

method_opt=>'for all columns size 1 for columns o1 o2 o3 size 15'
method_opt=>'for all columns size 1 for columns size 15 o1 o2 o3'

The problem I have is simple – to me both options look as if they will create histograms on all three named columns but the first option is the one that I type in “intuitively” if I don’t stop to think about it carefully. The first option, alas, will only gather a histogram on column o3 – the second option is the one that creates three histograms.

The manuals are a little unclear and ambiguous about how to construct a slightly complicated method_opt; there’s a fragment of text with the usual mix of square brackets, italics and ellipses to indicate optional and repeated clauses (interestingly the only clue about multiple columns is that comma separation seems to be required – despite one of the examples above working without commas) but there’s no explanation of when a “size” clause should go before a column name and when it should go after.

So here are a few more method_opt clauses – can you work out in advance which columns would have histograms if you used them and how many buckets in each histogram; there are a couple that may surprise you:


for columns o1 size 12, o2 size 13, o3 size 14

for columns o1 size 15 o2 size 16 o3 size 17

for columns size 18 o1 size 19 o2 size 20 o3

for columns size 21 o1 o2 size 22 o3

for columns o1 size 12, o2 size 12, o3 size 13, object_id size 13 object_name size 14

for columns size 22 o1 o2 for columns size 23 o3 object_id for columns size 24  object_name

Bottom line – to me – is to check very carefully that the method_opt is going to do what I want it to do; and for production systems I tend to use the final form that repeats the “for columns {size clause} {column list}”.

Error importing a DMP file to a very different database

Tom Kyte - Thu, 2018-01-18 07:06
I created a completely new Oracle database and I am trying to import a DMP file from a full backup of another database, and I am getting several errors. Can you help me? Command: impdp system/welcome1 full=yes directory=BACKUPSDR dumpfile=bc...
Categories: DBA Blogs

Making a URL call in PL/SQL

Tom Kyte - Thu, 2018-01-18 07:06
Hi Tom, I want to open an APEX application in a new window or new tab from the Oracle EBS Home page. I have tried different methods and finally came up with a solution based on the link below: https://asktom.oracle.com/pls/apex/asktom.search?tag=making...
Categories: DBA Blogs

Generating alphabetical sequence like a spreadsheet

Tom Kyte - Thu, 2018-01-18 07:06
Is there a way to generate an alphabetical iterator like the column headings in a spreadsheet? i.e. A...Z and then AA,AB,AC,...,AZ,BA,BB,BC,...,BZ and so on. Googling leads to http://www.sqlmonster.com/Uwe/Forum.aspx/sql-server-programming/73630/G...
Categories: DBA Blogs

Partner Webcast – Oracle Autonomous Data Warehouse Cloud Service

The Oracle Autonomous Data Warehouse Cloud Service is the first service announced by Oracle that leverages the Oracle Autonomous Database technology, using artificial intelligence to deliver...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle MOOC: Soar higher with Oracle JavaScript Extension Toolkit (JET) 4.0 (2018)

Oracle JavaScript Extension Toolkit (JET) empowers developers by providing a modular open source toolkit based on modern JavaScript, CSS3 and HTML5 design and development principles. Oracle JET is...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Cloud - Database Services

Syed Jaffar - Thu, 2018-01-18 03:05
This blog post highlights some of the essentials of the Oracle Cloud Database service offerings and their advantages. It also discusses Database deployment (DBaaS) benefits and the tools that can be used to automate backup/recovery operations and maintenance tasks.

No doubt, most software houses are now pushing their clients towards cloud services. Though the cloud provides several benefits, one should really understand the benefits, threats and offerings from the different cloud vendors. I am going to discuss the Oracle Cloud Database Services offering here.


Database service
  • Oracle Database Exadata cloud at customer (Full Oracle Databases hosted on an Oracle Exadata Database Machine inside the customer's DC)
  • DB Service on Bare Metal (Dedicated database instances with full administrate control)
  • Exadata Express Service
  • Oracle Database Exadata Cloud Service (Full Oracle Databases hosted on an Oracle Exadata Database Machine inside the Oracle Cloud)
  • Database Schema service (A dedicated schema with a complete development and deployment platform managed by Oracle)
It provides:

  • Rapid provisioning so you can get started in minutes
  • Grows as your business grows
  • Provides tight security to protect the data
  • Off-loads your day-to-day maintenance work

Database deployment (earlier known as DBaaS) is a compute environment which provides:
  • A Linux VM
  • Oracle software
  • A pre-created database
  • Cloud tools 
  • for automated and on-demand backup and recovery, automated patching and upgrades, a web monitoring tool, etc.




Patching a deployment database:
  • Use the Cloud UI from the Oracle Cloud DB Service console, or use the command-line utility: dbaascli
Backup and recovery of a deployment database:
  • bkup_api for automated backup at the service level
  • dbaascli for automated recovery at the service level
For more information about the features, overview and pricing of the Oracle Cloud Database services, visit: https://cloud.oracle.com/en_US/database

I will be blogging about each topic separately in the coming posts.

Stay tuned.


Alfresco DevCon 2018 – Day 1 – ADF, ADF and… ADF!

Yann Neuhaus - Wed, 2018-01-17 13:30

Here we are, the Alfresco DevCon 2018 day-1 is over (well, except for the social party)! It’s already been 2 years since I attended my last Alfresco event (BeeCon 2016, first of its name (organized by the Order of the Bee)), because I wasn’t able to attend the second BeeCon (2017) since it happened on the exact dates of our internal dbi xChange event. Yesterday was the DevCon 2018 day-0 with the Hackathon, the full-day training and the ACSCE/APSCE Certification preparation, but today was really the first day of sessions.

DevCon2018_Logo

 

The day-1 started, as usual, with a Keynote from Thomas DeMeo, who presented interesting information regarding the global direction of the Alfresco products, the Roadmap (of course) for the coming year, as well as some use cases where Alfresco was successfully used in very interesting projects, including some involving AWS.

DevCon2018_Roadmap

 

The second part of the keynote was presented by Brian Remmington, who explained the future of the Alfresco Digital Platform. In the coming months/years, Alfresco will include/refactor/work on the following points for its Digital Platform:

  • Improve SSO solutions. Kerberos is already working very well with Alfresco but they intend to also add SAML2, OAuth, aso… This is a very good thing!
  • Merging the Identity management for the ACS and APS into one single unit
  • Adding an API Gateway in front of ACS and APS so that you always talk to the same component, which targets both the ACS and APS in the background. It will also allow Alfresco to change the backend APIs, if needed (to align them for example), without the callers of the API Gateway noticing it. This is a very good thing too from a developer perspective, since you will be sure that your code will not break if Alfresco renames something, for example!
  • Merging the search on the ACS and APS into one single search index
  • We already knew it but it was confirmed that Alfresco [will probably drop the default installer and instead] will provide docker/kubernetes means for you to deploy Alfresco easily and quickly using these new technologies
  • Finishing the merge/refactor of other ACS/APS services into common units for both products so that work done once isn’t duplicated. This will concern the Search (Insight?) Service, the Transformation Service, the Form Service and a new Function Service (basically code functions shared between ACS and APS!).

All this looks promising, like really.

 

Then, starting at 10am, there were four streams running in parallel, so there was something that you would find interesting, that’s for sure. I didn’t mention it, but DevCon isn’t just a name… It means that the sessions are really technical; we are far from the (boring) business presentations that you can find at all other competitors’ events… I did a full morning on ADF. Mario Romano and Ole Hejlskov were presenting ADF Basics and Beyond.

For those of you who don’t know it yet, ADF (Alfresco Application Development Framework) is the latest piece of the Digital Platform that Alfresco has been bringing recently. It is a very interesting new framework that allows you to create your own UI to use in front of the ACS/APS. There are at the moment more than 100 Angular components that you can use, extend, compose and configure to build the UI that matches your use case. Alfresco Share still provides way more features than ADF, but I must say that I’m pretty impressed by what you can achieve in ADF with very little: it looks like it is going in the right direction.

ADF 2.0 was released recently (November 2017) and it is based on three main pillars: the latest version of Angular 5, a powerful JavaScript API (that talks in the background with the ACS/APS/AGS APIs) and the Yeoman generator + Angular CLI for fast deployments.

ADF provides 3 extension points for you to customize a component:

  • HTML extension points => adding HTML to customize the look & feel or the behavior
  • event listeners => adding behaviors on events for example
  • config properties => each component has properties that will customize it

One of the goals of ADF is the ability to upgrade your application without any effort. The Angular components will be updated in the future, but ADF was designed (and Alfresco’s effort is going) in a way that ensures that even if you use these components in your ADF application, an upgrade of your application won’t hurt at all. If you want to learn more about ADF, then I suggest the Alfresco Tech Talk Live that took place in December as well as the Alfresco Office Hours.

 

After this first introduction session to ADF, Eugenio Romano went deeper and showed how to play with ADF 2.0: installing it, deploying a first application and then customizing the main pieces like the theme, the document list, the actions, the search, aso… There were some really interesting examples and I’m really looking forward to seeing the tutorials and documentation about these new ADF 2.0 features and components popping up on the Alfresco website.

 

To conclude the morning, Denys Vuika presented a session about how to use and extend the Alfresco Content App (ACA). The ACA is the new ADF 2.0 application provided by Alfresco. It is a demo/sample application whose purpose is to be lightweight so it is as fast as possible. You can then customize it as you want and play with the ADF so that this sample application matches your needs. One of the demos Denys presented is how you can change the default previewer for certain types of files (.txt, .js, .xml for example). In ADF, that’s like 5 lines of code (of course you need to have another previewer of your own, but that’s not ADF stuff), and then he had an awesome preview for .js files where there was syntax highlighting right inside the Alfresco preview, as well as tooltips on names giving descriptions of variables and functions, apparently. This kind of small feature, done so easily, looks quite promising.

 

I already wrote a lot on ADF today so I will stop my blog here, but I did attend a lot of other very interesting sessions in the afternoon. I might talk about them tomorrow.

 

 

 

The article Alfresco DevCon 2018 – Day 1 – ADF, ADF and… ADF! appeared first on Blog dbi services.

Nested grouping fails to return unique values

Tom Kyte - Wed, 2018-01-17 12:46
Hi Tom, I'm using the following environment (which I believe is relevant because a similar query broke after migrating from 11 to 12c): Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production PL/SQL Release 12.1.0.2.0 -...
Categories: DBA Blogs

Finding records certain time apart

Tom Kyte - Wed, 2018-01-17 12:46
Hi, I have the following table: create table t ( user_id varchar2(10), clinic varchar2(50), visit_dt date ); INSERT INTO T VALUES ( 012, 'oncology', TO_DATE( '08-APR-2008') ); INSERT INTO T VALUES ( 012, 'oncology', TO_DATE( '21-APR-2008'...
Categories: DBA Blogs

Replacement for WM_CONCAT function to remove duplicate entries

Tom Kyte - Wed, 2018-01-17 12:46
We are upgrading Oracle from 11g to 12c, and in one piece of our code we are using the WM_CONCAT function. Below is the functionality: concatenate the field values and remove duplicate entries for the same. This requirement is on multiple fields. We need the...
Categories: DBA Blogs

How to Code for Parallel Processing

Tom Kyte - Wed, 2018-01-17 12:46
Tom, I have a table with 2.8 million rows, and one of the columns is a BLOB b/c this table holds binary attachments. I need to convert these BLOBS to their plain text equivalent to index the file contents in a system external to Oracle. I am s...
Categories: DBA Blogs
