Feed aggregator

Oracle LENGTH2, LENGTH4, LENGTHB, and LENGTHC Function with Examples

Complete IT Professional - Thu, 2016-12-29 05:00
In this article, we’ll look at the variations of the Oracle LENGTH function – LENGTH2, LENGTH4, LENGTHB, and LENGTHC. Purpose of the Oracle LENGTH2, LENGTH4, LENGTHB, and LENGTHC Function The purpose of these LENGTH function variants is the same as the basic LENGTH function – to find the length of a specified string. However, the […]
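As a quick illustration of how the variants differ (my example, not the article's; it assumes an AL32UTF8 database character set):

-- 'ü' occupies 2 bytes in UTF-8, so byte and character lengths diverge.
SELECT LENGTH('über')  AS len_chars,    -- 4 characters
       LENGTHB('über') AS len_bytes,    -- 5 bytes
       LENGTHC('über') AS len_unicode,  -- 4 complete Unicode characters
       LENGTH2('über') AS len_ucs2,     -- 4 UCS2 code units
       LENGTH4('über') AS len_ucs4     -- 4 UCS4 code units
FROM   dual;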
Categories: Development

Links for 2016-12-27 [del.icio.us]

Categories: DBA Blogs

GNW05 – Extending Databases with Hadoop video (plus GNW06 dates)

Tanel Poder - Tue, 2016-12-27 18:02

In case you missed this webinar, here’s a 1.5h holiday video about how Gluent “turbocharges” your databases with the power of Hadoop – all this without rewriting your applications :-)

Also, you can already sign up for the next webinar here:

  • GNW06 – Modernizing Enterprise Data Architecture with Gluent, Cloud and Hadoop
  • January 17 @ 12:00pm-1:00pm CST
  • Register here.

See you soon!

 

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Monitoring long running operations

DBA Scripts and Articles - Tue, 2016-12-27 11:03

It can be quite useful to be able to monitor long running operations on your database. The view v$session_longops can be used to estimate the time necessary for certain operations to finish. Here is a script to monitor an RMAN backup, along with a sample result. … Continue reading Monitoring long running operations
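The scripts themselves are on the original site; as a flavour of the approach, here is a minimal sketch of such a query (not the author's exact script):

-- Estimate progress and remaining time of running RMAN backup work.
SELECT sid,
       serial#,
       opname,
       ROUND(sofar / totalwork * 100, 2) AS pct_done,
       ROUND(time_remaining / 60, 1)     AS minutes_left
FROM   v$session_longops
WHERE  opname LIKE 'RMAN%'
AND    totalwork > 0
AND    sofar < totalwork;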

The post Monitoring long running operations appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

ADF BC REST 12.2.1.2 Custom Method JDeveloper Workaround

Andrejus Baranovski - Tue, 2016-12-27 04:19
Some of you who try to implement a custom method with ADF BC REST may face a JDeveloper 12.2.1.2 wizard issue. The JDeveloper 12.2.1.2 wizard refuses to register an ADF BC REST custom method, although it works perfectly at ADF runtime. This seems to be a JDeveloper 12.2.1.1 - 12.2.1.2 bug. There is a workaround: modify the REST service configuration manually and include the custom method binding.

The sample application is available on GitHub (jetcrud). This sample implements a custom method in the VO implementation class - testCall:


The method is exposed through the client interface:


Now if you go to the REST service definition and try to enable this method for the REST interface, JDeveloper reports an error:


Something goes wrong in the RSTCustomMethodTab class:


Workaround: add the method call into the REST service definition manually. I recommend doing this outside of JDeveloper, as it hangs; change the definition in an external editor. Here is an example of a custom method entry:



If you look at the JDeveloper wizard for the REST definition, it still shows the method unchecked, but you can ignore that:


To execute the custom method through a REST call, make sure to use POST and specify the method name along with the parameters in the REST request body:


Don't forget to provide the action Content-Type:


For more info, check the documentation section 22.13.5, Executing a Custom Action.

DOAG.tv Interviews (German)

Randolf Geist - Tue, 2016-12-27 03:00
In recent weeks, two interviews have been published that DOAG conducted with me at the annual DOAG Konferenz in Nürnberg.

The first dates back to the DOAG Konferenz 2015 and relates to my talk there about the new Parallel Execution features of Oracle 12c:

DOAG.tv Interview 2015

The second is from this year's DOAG Konferenz and relates to my performance tests of the Oracle Database Cloud and the accompanying talk:

DOAG.tv Interview 2016

Each interview lasts only a few minutes, so they touch on the respective topics in just a few key points.

Big Data SQL 3.1 is Now Available!

We are excited to announce that Oracle Big Data SQL 3.1 is now available. Big Data SQL 3.1 is another major milestone as we continue to expand...

Categories: DBA Blogs

Links for 2016-12-26 [del.icio.us]

Categories: DBA Blogs

Success Story: Version Control for PL/SQL

Gerger Consulting - Mon, 2016-12-26 15:43
It’s been a little over three months since we released Gitora 2.0 and the first success stories have started to emerge. Here is one of them:


Rhenus Logistics, the leading logistics company from Germany, uses Gitora to manage their Oracle Database.
Problem
Rhenus IT uses both Java and PL/SQL to serve their users and customers. They have a team of about 10 PL/SQL developers. The team manages more than 20,000 database packages, views, functions, procedures, object types and triggers spread over 30+ database schemas.
Rhenus IT wanted to move to a continuous delivery environment in which they can be more agile and deliver solutions to the business faster. Managing the PL/SQL code was the hardest piece of the puzzle.
Solution
After experimenting with other solutions in the market, Rhenus decided to move forward with Gitora.


Gitora enabled Rhenus Developers to:
  • Use Git, the prominent open source version control system used by millions of developers.
  • Move their database code between development and various staging databases automatically.
  • Move code between source and target databases very fast because Gitora only executes differences between source and target databases, without comparing the code bases in both databases first (which can be very time consuming).
  • Enforce check-in, check-out of database objects at the database level.
  • Automate the build process for the database code using the Gitora APIs.
  • Implement an affordable continuous delivery solution compared to alternatives.
Michiel Arentsen, the System Architect who implemented the solution at Rhenus, has started an excellent blog in which he writes about his Gitora implementation. We highly recommend you check it out. Below is the list of blog posts he wrote, which should be very useful to anyone who is currently implementing Gitora at his/her company:
Categories: Development

PL/SQL Objects for JSON in Oracle 12cR2

Tim Hall - Mon, 2016-12-26 13:37

I’ve been playing around with some more of the new JSON features in Oracle Database 12c Release 2 (12.2).

The first thing I tried was the new PL/SQL support for the JSON functions and conditions that were introduced for SQL in 12.1. That was all pretty obvious.

Next I moved on to the new PL/SQL objects for JSON. These are essentially a replacement for APEX_JSON as far as generation and parsing of JSON data are concerned. If I’m honest I was kind-of confused by this stuff at first for a couple of reasons.

  • If you are coming to it with an APEX_JSON mindset it’s easy to miss the point. Once you “empty your cup” it’s pretty straightforward.
  • The documentation is pretty terrible at the moment. There are lots of mistakes. I tweeted about this the other day and some folks from the Oracle documentation team got back to me about it. I gave them some examples of the problems, so hopefully it will get cleaned up soon!
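To give a flavour of the new object types, here is a minimal sketch of parsing and generating JSON with the documented JSON_OBJECT_T type (the data is hypothetical):

-- Enable server output to see the results.
DECLARE
  jo JSON_OBJECT_T;
BEGIN
  -- Parse a JSON document into an object type instance.
  jo := JSON_OBJECT_T.parse('{"empno":7839,"ename":"KING"}');
  DBMS_OUTPUT.put_line('ename = ' || jo.get_string('ename'));

  -- Modify the document and serialize it back to text.
  jo.put('job', 'PRESIDENT');
  DBMS_OUTPUT.put_line(jo.to_string);
END;
/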

I was originally intending to write a single article covering both of these new JSON features, but it got clumsy, so I separated them.

The second one isn’t much more than a glorified links page at the moment, but as I cover the remaining functionality it will either expand or contain more links depending on the nature of the new material. Big stuff will go in a separate article. Small stuff will be included in this one.

I also added a new section to this recent ORDS article, giving an example of handling the JSON payload using the new object types.

I’ve only scratched the surface of this stuff, so I’ll probably revisit the articles several times as I become more confident with it.

Cheers

Tim…

PS. Remember, you can practice a lot of this 12.2 stuff for free at https://livesql.oracle.com.


ADF REST Framework Version 2 (and later) - 12.2.1.2

Andrejus Baranovski - Mon, 2016-12-26 06:40
While building our new Oracle Cloud application with ADF BC REST and JET, I discovered an unannounced feature in ADF BC REST 12.2.1.2. Starting from 12.2.1.2, ADF BC REST offers runtime framework versions. This is configurable in the adf-config.xml file or can be provided through a REST request header. ADF 12.2.1.2 supports versions 1, 2 and 3. Version 2 offers better query support, while version 3 provides a better response for hierarchical data - see 16.5.2 What You May Need to Know About Versioning the ADF REST Framework.

You can specify the version in adf-config.xml, as per the documentation:


Version 2 offers more advanced support for data queries. Besides query by example from version 1, we can use the advanced query syntax - see 22.5.4 Filtering a Resource Collection with a Query Parameter. For example, the like operator wasn't supported in version 1:


It is supported in version 2. I can specify version 2 directly in the REST request header, as in the example below:


Download ADF BC REST sample from GitHub repository - jetcrud.

A Guide to the Oracle TRUNCATE TABLE Statement

Complete IT Professional - Mon, 2016-12-26 05:00
The Oracle TRUNCATE TABLE statement is a useful statement for removing data in Oracle SQL. I’ll explain what the statement does and show you some examples in this article. What Does the Oracle TRUNCATE TABLE Statement Do? The Oracle TRUNCATE statement, or TRUNCATE TABLE statement, removes all data from a table. It’s similar to the […]
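For instance, a minimal sketch (table and data are hypothetical, not from the article):

-- Create and populate a small demo table.
CREATE TABLE demo_emp (id NUMBER, name VARCHAR2(50));
INSERT INTO demo_emp VALUES (1, 'Alice');
COMMIT;

-- TRUNCATE removes all rows at once. Unlike DELETE, it is DDL:
-- it commits implicitly and cannot be rolled back.
TRUNCATE TABLE demo_emp;

SELECT COUNT(*) FROM demo_emp;  -- returns 0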
Categories: Development

Oracle JDeveloper (SOA and BPM) 12c (12.2.1.2.0) - Download Temporarily Not Available

Andrejus Baranovski - Mon, 2016-12-26 04:40
If you try to download JDeveloper (as well as SOA Suite or BPM Suite) from OTN, you will see a message in the OTN download section - "This page is temporarily not available we'll be back soon".

You should not worry. As per Shay Shmeltzer's answer on the OTN Forum: "We discovered an issue with the installer - we are working to fix this. Once we have the updated installer we'll update the forum and the pages." Read more here.

There is a solution - if you urgently need to download JDeveloper, go to Oracle Software Delivery Cloud and download it from there.

Links for 2016-12-25 [del.icio.us]

Categories: DBA Blogs

12cR1 RAC Posts -- 1 : Grid Infrastructure Install completed (first cycle)

Hemant K Chitale - Sat, 2016-12-24 09:17
Just as I had posted 11gR2 RAC Posts in 2014  (listed here), I plan to post some 12cR1 RAC (GI, ASM) posts over the next few weeks.

Here's my Grid Infrastructure up and running. (Yes, I used racattack for this first 12cR1 setup.)

[root@collabn1 ~]# crsctl status resource -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.SHARED.advm
ONLINE ONLINE collabn1 Volume device /dev/a
sm/shared-141 is onl
ine,STABLE
ONLINE ONLINE collabn2 Volume device /dev/a
sm/shared-141 is onl
ine,STABLE
ora.DATA.dg
ONLINE ONLINE collabn1 STABLE
ONLINE ONLINE collabn2 STABLE
ora.FRA.dg
ONLINE ONLINE collabn1 STABLE
ONLINE ONLINE collabn2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE collabn1 STABLE
ONLINE ONLINE collabn2 STABLE
ora.asm
ONLINE ONLINE collabn1 Started,STABLE
ONLINE ONLINE collabn2 Started,STABLE
ora.data.shared.acfs
ONLINE ONLINE collabn1 mounted on /shared,S
TABLE
ONLINE ONLINE collabn2 mounted on /shared,S
TABLE
ora.net1.network
ONLINE ONLINE collabn1 STABLE
ONLINE ONLINE collabn2 STABLE
ora.ons
ONLINE ONLINE collabn1 STABLE
ONLINE ONLINE collabn2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE collabn2 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE collabn1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE collabn1 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE collabn1 169.254.3.70 172.16.
100.51,STABLE
ora.collabn1.vip
1 ONLINE ONLINE collabn1 STABLE
ora.collabn2.vip
1 ONLINE ONLINE collabn2 STABLE
ora.cvu
1 ONLINE ONLINE collabn1 STABLE
ora.mgmtdb
1 ONLINE ONLINE collabn1 Open,STABLE
ora.oc4j
1 ONLINE ONLINE collabn1 STABLE
ora.scan1.vip
1 ONLINE ONLINE collabn2 STABLE
ora.scan2.vip
1 ONLINE ONLINE collabn1 STABLE
ora.scan3.vip
1 ONLINE ONLINE collabn1 STABLE
--------------------------------------------------------------------------------
[root@collabn1 ~]#
[root@collabn1 ~]# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 96fbcb40bfeb4ff7bf18881adcfef149 (/dev/asm-disk1) [DATA]
Located 1 voting disk(s).
[root@collabn1 ~]#
[root@collabn1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1632
Available space (kbytes) : 407936
ID : 827167720
Device/File Name : +DATA
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@collabn1 ~]#
[root@collabn1 ~]# nslookup collabn-cluster-scan
Server: 192.168.78.51
Address: 192.168.78.51#53

Name: collabn-cluster-scan.racattack
Address: 192.168.78.252
Name: collabn-cluster-scan.racattack
Address: 192.168.78.253
Name: collabn-cluster-scan.racattack
Address: 192.168.78.251

[root@collabn1 ~]#
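
As an additional sanity check, the ASM diskgroups can also be queried from SQL*Plus on the ASM instance (a minimal sketch, not part of the output above):

-- Connect with: sqlplus / as sysasm
SELECT name, state, type, total_mb, free_mb
FROM   v$asm_diskgroup;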


I hope to run a few cycles of setups, switching to different node names, IPs, DiskGroup names etc. over the next few weeks.

Categories: DBA Blogs

HAPPY HOLIDAYS @OracleIMC



Categories: DBA Blogs

Target=_Blank for Cards report template

Jeff Kemp - Sat, 2016-12-24 02:20

I wanted to use the “Cards” report template for a small report which lists file attachments. When the user clicks on one of the cards, the file should download and open in a new tab/window. Unfortunately, the Cards report template does not include a placeholder for extra attributes for the anchor tag, so it won’t let me add “target=_blank” like I would normally.

One solution is to edit the Cards template to add the extra placeholder; however, this means breaking the subscription from the universal theme.

As a workaround for this I’ve added a small bit of javascript to add the attribute after page load, and whenever the region is refreshed.

  • Set report static ID, e.g. “mycardsreport”
  • Add Dynamic Action:
    • Event = After Refresh
    • Selection Type = Region
    • Region = (the region)
  • Add True Action: Execute JavaScript Code
    • Code = $("#mycardsreport a.t-Card-wrap").attr("target","_blank"); (replace the report static ID in the selector)
    • Fire On Page Load = Yes

Note: this code affects all cards in the chosen report.


Filed under: APEX Tagged: APEX, javascript, jQuery, tips-&-tricks

First touch with Oracle Container Cloud Service

Marcelo Ochoa - Fri, 2016-12-23 17:52
Some weeks ago Oracle released Oracle Container Cloud Service, Oracle's support for Docker containers.
It basically provides several Linux machines to run Docker containers; one of them is designated as the controller of the cluster services. Here is a simple screenshot of the service view:


By clicking on the Container Console link you access the manager instance.

The admin password for this console was defined during service creation; here is the look & feel of the container console:

Basically, the steps for using your containers are:

  • Define a service
  • Deploy a service
  • Monitor your running containers

Defining a service is basically building a web representation of a docker run command; the supported version is Docker 1.10. Several services are defined as examples, including an Apache/PHP web server, Jenkins, and MariaDB, among others.
I tested this functionality by adding a service for running Oracle 12cR1. The image for running Oracle 12c is not in a public repository, so you have to define a private registry for Docker images; remember that you must not push Oracle binaries to a public repository, because that violates the license terms.
So if you follow this guide you can build your own registry server, but this is not enough, because the registry server should be enabled over https for security reasons. Following this guide, you can put an NGinx reverse proxy in front of it with an SSL certificate signed by LetsEncrypt; but to get a free SSL certificate, the registry server must be registered in a public DNS server.
If you get a registry server up, running and accessible through the Internet over https, it can be added in the Registries->New Registry section. The mandatory entries are:
Email: user@domain
URL: server.domain.com
UserName: user_name
Password: your_password
Description: A description text
Port 443 for the SSL traffic is not required in the URL; the URL will be translated, for example, to https://server.domain.com:443/v2/. The registry server will ask for an HTTP authenticated user, and the Cloud Service will provide the UserName and Password values.
Once you have a registry server up, running and registered with the Cloud Service, you can define your Docker test. The Service Builder page looks like this:
The Builder pane is a graphical representation of the docker run command; the image name (in black) must include the 443 port, for example server.domain.com:443/oracle/database:12.1.0.2-ee.
The process to build the above image on your local machine is:
$ cd dockerfiles
$ ./buildDockerImage.sh -v 12.1.0.2 -e
$ docker login -u user_name -p user_pwd server.domain.com:443
$ docker tag oracle/database:12.1.0.2-ee server.domain.com:443/oracle/database:12.1.0.2-ee
$ docker push server.domain.com:443/oracle/database:12.1.0.2-ee
Then, when you deploy your service, the Cloud Service will pull the above image from your private registry.
The idea of building an Oracle 12cR1 EE image and testing it with the Oracle Container Cloud Service is to compare the performance against the DBaaS and IaaS tests. The result is:
OCCS
Max IOPS = 1387654
Max MBPS = 21313
Latency  = 0
Not so bad. Under the hood this Oracle RDBMS is running on Oracle Linux Server release 6.6.89/Docker 1.10, 4 x Intel(R) Xeon(R) CPU - 16Gb RAM; the file system seems to be XFS.

Drawbacks:

  • By not providing a private registry where you can build/pull/push your custom images, the usage of the service could be complicated for most inexperienced Docker users.
  • I can't find a way to define a shared filesystem for a given volume. For example, the above deployment puts the Oracle datafiles into an internal container volume; if you stop your deployment all the data is lost, and the only possibility is to pause/unpause the service if you don't want to lose your data. Note: in the Service Editor (Tips & Trick button) there is an example defining an external volume as /NFS/{{.EnvironmentID}}/{{.DeploymentID}}/{{.ServiceID}}/{{.ServiceSlot}}:/mnt/data, but it didn't work for me.
  • You can't modify host OS parameters. For example, if you want to deploy an ElasticSearch cluster it is necessary to set vm.max_map_count=262144 in the /etc/sysctl.conf file, so it is a limited environment even for a simple test case.
  • The Docker version is 1.10.3, which means that if you want to deploy Oracle XE it doesn't work, because --shm-size=1g is not supported.
  • Sometimes the Container Cloud Console kills my Chrome browser (Linux and Windows versions); here is the screenshot - it seems to be a JavaScript problem:
Final thoughts

The Container Cloud Service console is a good abstraction (graphical interface) over the typical Docker command line services, for example:

Services -> docker run command
Stacks -> docker-compose command, docker-compose.yml (graphical interface)
Hosts -> bare metal/VM servers
In my personal opinion, if I have to deploy a complex Docker installation, I'll deploy a set of Oracle Compute Cloud Service instances running Oracle Linux/Ubuntu with the latest Docker release and the native docker swarm service. Why?

  • It runs faster, see my previous post.
  • I have control of the host parameters (sysctl among others, see the host setup section).
  • I can define shared filesystems on ext4 or NFS partitions.
  • I can build my own images by using Dockerfile commands.
  • I can deploy/manage the infrastructure (nodes/network) using the docker swarm command.


Primary Storage, Snapshots, Databases, Backup, and Archival.

Kubilay Çilkara - Fri, 2016-12-23 13:34
Data in the enterprise comes in many forms. Simple flat files, transactional databases, scratch files, complex binary blobs, encrypted files, and whole block devices, and filesystem metadata. Simple flat files, such as documents, images, application and operating system files are by far the easiest to manage. These files can simply be scanned for access time to be sorted and managed for backup and archival. Some systems can even transparently symlink these files to other locations for archival purposes. In general, basic files in this category are opened and closed in rapid succession, and actually rarely change. This makes them ideal for backup as they can be copied as they are, and in the distant past, they were all that there was and that was enough.

Then came multitasking. With the introduction of multiple programs running in a virtual memory space, it became possible that files could be opened by two different applications at once. It also became possible that these locked files could be opened and changed in memory without being synchronized back to disk. So elaborate systems were developed to handle file locks, and buffers that flush their changes back to those files on a periodic or triggered basis. Databases in this space were always open, and could not be backed up as they were. Every transaction was logged to a separate set of files, which could be played back to restore the database to functionality. This is still in use today, as reading the entire database may not be possible, or performant, in a production system. This is called a transaction log. Mail servers, database management systems, and networked applications all had to develop software programming interfaces to back up to a single string of files. Essentially this format is called Tape Archive (tar).

Eventually, and quite recently actually, these systems became so large and complex as to require another layer of interface with the whole filesystem: there were certain application and operating system files that simply were never closed for copy. The concept of Copy on Write was born. The entire filesystem was essentially always closed, and any writes were written as an incremental or completely new file, with the old one marked for deletion. Filesystems in this modern era progressively implemented more pure copy-on-write, transaction-based journaling so files could be assured intact on system failure, and could be read for archival or multiple application access. Keep in mind this is a one-paragraph summation of 25 years of filesystem technology, and not specifically applicable to any single filesystem.

Along with journaling, which allowed a system to retain filesystem integrity, came the idea that the filesystem could intelligently retain old copies of files, and the state of the filesystem itself, as something called a snapshot. All of this stems from the microcosm of databases applied to general filesystems. Again, databases still need to be backed up and accessed through controlled methods, but slowly the features of databases find their way into operating systems and filesystems. Modern filesystems use shadow copies and snapshotting to allow rollback of file changes, complete system restore, and undeletion of files as long as the free space hasn’t been reallocated.

Which brings us to my next point: the difference between a backup or archive, and a snapshot. A snapshot is a picture of what a disk used to be. This picture is kept on the same disk and, in the event of a physical media failure or overuse of the disk itself, is totally useless. There needs to be sufficient free space on the disk to hold the old snapshots, and if the disk fails, all is still lost. While media redundancy is easily managed to virtually preclude failure, space considerations, especially in aged or unmanaged filesystems, can easily get out of hand. The effect of a filesystem growing near to capacity is essentially a limitation of usable features. As time moves on, simple file rollback features will lose all effectiveness, and users will have to go to the backup to find replacements.

There are products and systems to automatically compress and move files that are unlikely to be accessed in the near future. These systems usually create a separate filesystem and replace your files with links to that system. This has the net effect of reducing the primary storage footprint and the backup load, and allowing your filesystem to grow effectively forever. In general, this is not as good a thing as it sounds, as the archive storage may still fill up, and you then have an effective filesystem that is larger than the maximum theoretical size, which will have to be forcibly pruned to ever restore properly. Also, if the archive system is not integrated with your backup system, the backup system probably will be unaware of it. This would mean that the archived data would be lost in the event of a disaster or catastrophe.

Which brings about another point: whatever your backup vendor supports, you are effectively bound to use those products for the life of the backup system. This may be ten or more years and may impact business flexibility. Enterprise business systems backup products can easily cost tens of thousands per year, and however flexible your systems need to be, your backup vendor must provide that flexibility.


Long term planning and backup systems go hand in hand. Ideally, you should be shooting for a 7 to 12-year lifespan for these systems. They should be able to scale in features and load for the predicted curve of growth, with a very wide margin for error. Conservatively, you should plan on a 25% data growth rate per year minimum; generally speaking, 50 to 100% is far more likely. Highly integrated backup systems truly are a requirement of Information Services, and while costly, failure to effectively plan for disaster or catastrophe will lead to an end of business continuity, and likely the continuity of your employment.

Jason Zhang is the product marketing person for Rocket Software's Backup, Storage, and Cloud solutions.
Categories: DBA Blogs

IT-Tage 2016 Informatik aktuell: feedback

Yann Neuhaus - Fri, 2016-12-23 09:07

Today, to finish the year, I post a brief personal impression of the IT-Tage 2016 in Frankfurt at the Hotel Maritim, where I was also a speaker.


I presented 2 sessions on SQL Server: “SQL Server Errorlog Entmystifizierung” & “SQL Server 2016: Neue Sicherheitsfunktionen”.
I wasn’t the only one from dbi services who spoke at that conference:

  • David Barbarin with also 2 sessions: “SQL Server – Locks, latches and spinlocks” & “SQL Server 2016 Availability Group Enhancements”
  • Clemens Bleile with 1 session: “SQL Plan Directives: Neuigkeiten in 12.2. Produktions-Ausführungspläne in Testumgebungen reproduzieren”
  • Philippe Schweitzer with 1 session: “Feasibility study for building a software factory based on GIT repository”
  • Daniel Westermann with 1 session: “Breaking the deadlock: Migrating from proprietary databases to PostgreSQL”

You can already download all the presentations via this link.

After my presentation day, I had the opportunity to attend a very interesting session by Oliver Hock: “Ein Prozess lernt laufen: LEGO-Mindstorms-Steuerung mit BPMN” (“A process learns to walk: LEGO Mindstorms control with BPMN”). With a Lego Mindstorms kit, he showed how to solve a magic cube.


This session is also on YouTube - watch the demo at the end (the last 60 seconds). It was very nice! ;-)

I would like to thank the entire team of Informatik Aktuell, who put together a smooth and interesting event.

I hope that I can go again next year, present new sessions, and follow other interesting ones…

In the evening, you could also enjoy the Christmas Market, which is two metro stops from the hotel.

I wish you a merry Christmas and, as we say in Alsace: “A guetta rutsch ins neja Johr!” (a good slide into the new year!)

 

The post IT-Tage 2016 Informatik aktuell: feedback appeared first on Blog dbi services.

Pages

Subscribe to Oracle FAQ aggregator