Feed aggregator

Spark Streaming and Kafka - Creating a New Kafka Connector

Rittman Mead Consulting - Thu, 2019-02-21 08:39
More Kafka and Spark, please!

Hello, world!

Having joined Rittman Mead more than 6 years ago, I feel the time has come for my first blog post. Let me start by standing on the shoulders of blogging giants, revisiting Robin's old blog post Getting Started with Spark Streaming, Python, and Kafka.

The blog post was very popular, touching on the subjects of Big Data and Data Streaming. To put my own twist on it, I decided to:

  • not use Twitter as my data source, because there surely must be other interesting data sources out there,
  • use Scala, my favourite programming language, to see how different the experience is from using Python.
Why Scala?

Scala is admittedly more challenging to master than Python. However, because Scala compiles to Java bytecode, it can be used pretty much anywhere Java is being used. And Java is being used everywhere. Python is arguably even more widely used than Java; however, it remains a dynamically typed scripting language that is easy to write but can be hard to debug.

Is there a case for using Scala instead of Python for the job? Both Spark and Kafka were written in Scala (and Java), hence they should get on like a house on fire, I thought. Well, we are about to find out.

My data source: OpenWeatherMap

When it comes to finding sample data sources for data analysis, the selection out there is amazing. At the time of this writing, Kaggle offers 14,470 freely available datasets, many of them in easy-to-digest formats like CSV and JSON. However, when it comes to real-time sample data streams, the selection is quite limited. Twitter is usually the go-to choice - easily accessible and well documented. Too bad I decided not to use Twitter as my source.

Another alternative is the Wikipedia Recent changes stream. Although in the stream schema there are a few values that would be interesting to analyse, overall this stream is more boring than it sounds - the text changes themselves are not included.

Fortunately, I came across the OpenWeatherMap real-time weather data website. They have a free API tier, limited to 1 request per second, which is quite enough for tracking changes in weather. Their different API schemas return plenty of numeric and textual data, all interesting for analysis. The APIs work in a very standard way: first you apply for an API key, then you query the API with a simple HTTP GET request using that key. (Apply for your own API key instead of using the sample one - it is easy.):

This request

https://samples.openweathermap.org/data/2.5/weather?q=London,uk&appid=b6907d289e10d714a6e88b30761fae22

gives the following result:

{
  "coord": {"lon":-0.13,"lat":51.51},
  "weather":[
    {"id":300,"main":"Drizzle","description":"light intensity drizzle","icon":"09d"}
  ],
  "base":"stations",
  "main": {"temp":280.32,"pressure":1012,"humidity":81,"temp_min":279.15,"temp_max":281.15},
  "visibility":10000,
  "wind": {"speed":4.1,"deg":80},
  "clouds": {"all":90},
  "dt":1485789600,
  "sys": {"type":1,"id":5091,"message":0.0103,"country":"GB","sunrise":1485762037,"sunset":1485794875},
  "id":2643743,
  "name":"London",
  "cod":200
}
Getting data into Kafka - considering the options

There are several options for getting your data into a Kafka topic. If the data will be produced by your application, you should use the Kafka Producer Java API. You can also develop Kafka Producers in .Net (usually C#), C, C++, Python, Go. The Java API can be used by any programming language that compiles to Java bytecode, including Scala. Moreover, there are Scala wrappers for the Java API: skafka by Evolution Gaming and Scala Kafka Client by cakesolutions.
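For comparison, had we gone down the plain Producer route, a minimal Scala sketch against the Java Producer API might look like this - the broker address, topic name and payload are placeholders, not part of the connector built below:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object WeatherProducerSketch extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")   // placeholder broker address
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

  val producer = new KafkaProducer[String, String](props)
  // "weather-topic" is a hypothetical topic name; the value would be the raw JSON returned by the API
  producer.send(new ProducerRecord[String, String]("weather-topic", "London", """{"temp":280.32}"""))
  producer.close()
}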

OpenWeatherMap is not my application and what I need is integration between its API and Kafka. I could cheat and implement a program that would consume OpenWeatherMap's records and produce records for Kafka. The right way of doing that, however, is by using Kafka Source connectors, for which there is an API: the Connect API. Unlike the Producers, which can be written in many programming languages, for the Connectors I could only find a Java API. I could not find any nice Scala wrappers for it. On the upside, Confluent's Connector Developer Guide is excellent, rich in detail though not quite a step-by-step cookbook.

However, before we decide to develop our own Kafka connector, we must check for existing connectors. The first place to go is Confluent Hub. There are quite a few connectors there, complete with installation instructions, ranging from connectors for particular environments like Salesforce, SAP, IRC and Twitter to ones integrating with databases like MS SQL and Cassandra. There is also a connector for HDFS and a generic JDBC connector. Is there one for HTTP integration? Looks like we are in luck: there is one! However, this connector turns out to be a Sink connector.

Ah, yes, I should have mentioned - there are two flavours of Kafka Connectors: the Kafka-inbound ones are called Source Connectors and the Kafka-outbound ones are Sink Connectors. And the HTTP connector in Confluent Hub is Sink only.

Googling for Kafka HTTP Source Connectors gives few interesting results. The best I could find was Pegerto's Kafka Connect HTTP Source Connector. Contrary to what the repository name suggests, the implementation is quite domain-specific - it extracts stock prices from particular websites - and has very little error handling. Searching Scaladex for 'Kafka connector' does yield quite a few results, but nothing for HTTP. However, there I found Agoda's nice and simple Source JDBC connector (though for a very old version of Kafka), written in Scala. (Do not use this connector for JDBC sources; instead use the one by Confluent.) I can use this as an example to implement my own.

Creating a custom Kafka Source Connector

The best place to start when implementing your own Source Connector is the Confluent Connector Development Guide. The guide uses JDBC as an example. Our source is an HTTP API, so early on we must establish whether our data source is partitioned, whether we need to manage offsets for it, and what the schema is going to look like.

Partitions

Is our data source partitioned? A partition is a division of source records that usually depends on the source medium. For example, if we are reading our data from CSV files, we can consider the different CSV files to be a natural partition of our source data. Another example of partitioning could be database tables. But in both cases the best partitioning approach depends on the data being gathered and its usage. In our case, there is only one API URL and we are only ever requesting current data. If we were to query weather data for different cities, that would be a very good partitioning - by city. Partitioning would allow us to parallelise the Connector data gathering - each partition would be processed by a separate task. To make my life easier, I am going to have only one partition.
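Purely for illustration, had we partitioned by city, the source partition attached to each record would just be a small map keyed by the city name - a hypothetical sketch; our connector instead uses a single partition keyed by the service name, as seen later in sourceRecords:

import scala.collection.JavaConverters._

// Hypothetical per-city source partition map - the same map would be attached to every
// SourceRecord produced for that city, and each Task could be assigned a subset of cities.
def cityPartition(city: String): java.util.Map[String, String] = Map("city" -> city).asJava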

Offsets

Offsets are for keeping track of the records already read and the records yet to be read. An example of that is reading data from a file that is continuously being appended - there can be rows already inserted into a Kafka topic, and we do not want to process them again and create duplicates. Why would that be a problem? Surely, when going through a source file row by row, we know which row we are looking at: anything above the current row is processed, anything below it is new records. Unfortunately, most of the time it is not as simple as that: first of all, Kafka supports concurrency, meaning there can be more than one Task busy processing Source records. Another consideration is resilience - if a Kafka Task process fails, another process will be started up to continue the job. This can be an important consideration when developing a Kafka Source Connector.

Is it relevant for our HTTP API connector? We are only ever requesting current weather data. If our process fails, we may miss some time periods, but we cannot recover them later on anyway. Offset management is not required for our simple connector.
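For completeness, if offsets had been needed (for example, to avoid re-requesting a time period after a restart), the Connect framework lets a task read back the last committed offset for a partition through its context. A hedged sketch of what that lookup might look like - the partition map and the "timestamp" offset key are assumptions, not something our connector actually uses:

import scala.collection.JavaConverters._

// Inside a SourceTask the framework provides `context` (a SourceTaskContext).
// The partition map must match the one attached to the SourceRecords we emit.
val partition = Map("service" -> "OpenWeatherMap").asJava
val lastTimestamp: Option[Long] =
  Option(context.offsetStorageReader.offset(partition))     // null when nothing has been stored yet
    .flatMap(stored => Option(stored.get("timestamp")))     // "timestamp" is a hypothetical offset key
    .map(_.toString.toLong)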

So that is Partitions and Offsets dealt with. Can we make our lives just a bit more difficult? Fortunately, we can. We can create a custom Schema and then parse the source data to populate a Schema-based Structure. But we will come to that later.
First let us establish the Framework for our Source Connector.

Source Connector - the Framework

The starting point for our Source Connector is two Java API classes: SourceConnector and SourceTask. We will put their implementations into separate .scala source files, but they are shown here together:

import org.apache.kafka.connect.source.{SourceConnector, SourceTask}

class HttpSourceConnector extends SourceConnector {...}
class HttpSourceTask extends SourceTask {...}

These two classes will be the basis for our Source Connector implementation:

  • HttpSourceConnector represents the Connector process management. Each Connector process will have only one SourceConnector instance.
  • HttpSourceTask represents the Kafka task doing the actual data integration work. There can be one or many Tasks active for an active SourceConnector instance.

We will have some additional classes for config and for HTTP access.
But first let us look at each of the two classes in more detail.

SourceConnector class

SourceConnector is an abstract class that defines an interface that our HttpSourceConnector needs to adhere to. The first function we need to override is config:

  private val configDef: ConfigDef =
      new ConfigDef()
          .define(HttpSourceConnectorConstants.HTTP_URL_CONFIG, Type.STRING, Importance.HIGH, "Web API Access URL")
          .define(HttpSourceConnectorConstants.API_KEY_CONFIG, Type.STRING, Importance.HIGH, "Web API Access Key")
          .define(HttpSourceConnectorConstants.API_PARAMS_CONFIG, Type.STRING, Importance.HIGH, "Web API additional config parameters")
          .define(HttpSourceConnectorConstants.SERVICE_CONFIG, Type.STRING, Importance.HIGH, "Kafka Service name")
          .define(HttpSourceConnectorConstants.TOPIC_CONFIG, Type.STRING, Importance.HIGH, "Kafka Topic name")
          .define(HttpSourceConnectorConstants.POLL_INTERVAL_MS_CONFIG, Type.STRING, Importance.HIGH, "Polling interval in milliseconds")
          .define(HttpSourceConnectorConstants.TASKS_MAX_CONFIG, Type.INT, Importance.HIGH, "Kafka Connector Max Tasks")
          .define(HttpSourceConnectorConstants.CONNECTOR_CLASS, Type.STRING, Importance.HIGH, "Kafka Connector Class Name (full class path)")

  override def config: ConfigDef = configDef

This validates all the required configuration parameters. We also provide a description for each configuration parameter, which will be shown in the missing-configuration error message.

HttpSourceConnectorConstants is an object where config parameter names are defined - these configuration parameters must be provided in the connector configuration file:

object HttpSourceConnectorConstants {
  val HTTP_URL_CONFIG               = "http.url"
  val API_KEY_CONFIG                = "http.api.key"
  val API_PARAMS_CONFIG             = "http.api.params"
  val SERVICE_CONFIG                = "service.name"
  val TOPIC_CONFIG                  = "topic"
  val TASKS_MAX_CONFIG              = "tasks.max"
  val CONNECTOR_CLASS               = "connector.class"

  val POLL_INTERVAL_MS_CONFIG       = "poll.interval.ms"
  val POLL_INTERVAL_MS_DEFAULT      = "5000"
}
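For reference, a connector configuration supplying these parameters might look roughly like this - the connector class package, URL, key, topic and the exact encoding of http.api.params are placeholders and assumptions of mine, not values from the blog post:

name=weather-http-source
connector.class=com.example.connect.HttpSourceConnector
tasks.max=1
http.url=https://api.openweathermap.org/data/2.5/weather
http.api.key=REPLACE_WITH_YOUR_API_KEY
http.api.params=q=London,uk
service.name=OpenWeatherMap
topic=weather-topic
poll.interval.ms=5000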

Another simple function to be overridden is taskClass - for the SourceConnector class to know its corresponding SourceTask class.

  override def taskClass(): Class[_ <: SourceTask] = classOf[HttpSourceTask]

The last two functions to be overridden here are start and stop. These are called upon the creation and termination of a SourceConnector instance (not a Task instance). JavaMap here is an alias for java.util.Map - a Java Map, not to be confused with the native Scala Map, which cannot be used here. (If you are a Python developer, a Map in Java/Scala is similar to a Python dictionary, but strongly typed.) The interface requires Java data structures, but that is fine - we can convert between the two. By far the biggest problem here is the assignment of the connectorConfig variable - we cannot have a functional-programming-friendly immutable value here. The variable is defined at the class level

  private var connectorConfig: HttpSourceConnectorConfig = _

and is set in the start function and then referred to in the taskConfigs function further down. This does not look pretty in Scala. Hopefully somebody will write a Scala wrapper for this interface.
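For completeness, the aliases and conversions assumed throughout these snippets boil down to a JavaConverters import and a couple of type aliases along these lines (only JavaMap is spelled out in the text; the rest is my assumption):

import scala.collection.JavaConverters._      // brings .asScala / .asJava into scope

object JavaInterop {
  // Assumed aliases; the text only spells out JavaMap = java.util.Map
  type JavaMap[K, V] = java.util.Map[K, V]
  type JavaList[T]   = java.util.List[T]

  // Example round trip between a Java map and an immutable Scala map
  val javaProps: JavaMap[String, String] = Map("topic" -> "weather-topic").asJava
  val scalaProps: Map[String, String]    = javaProps.asScala.toMap
}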

Because there is no logout/shutdown/sign-out required for the HTTP API, the stop function just writes a log message.

  override def start(connectorProperties: JavaMap[String, String]): Unit = {
    Try (new HttpSourceConnectorConfig(connectorProperties.asScala.toMap)) match {
      case Success(cfg) => connectorConfig = cfg
      case Failure(err) => connectorLogger.error(s"Could not start Kafka Source Connector ${this.getClass.getName} due to error in configuration.", new ConnectException(err))
    }
  }

  override def stop(): Unit = {
    connectorLogger.info(s"Stopping Kafka Source Connector ${this.getClass.getName}.")
  }

HttpSourceConnectorConfig is a thin wrapper class for the configuration.
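The wrapper class is not listed in the blog post; a minimal sketch of what it might look like, assuming it simply keeps the property map (taskConfigs below exposes it as connectorProperties) and provides typed getters - the getter names match the calls made by the Task further down, while the '&'-separated encoding of http.api.params is purely my assumption:

class HttpSourceConnectorConfig(val connectorProperties: Map[String, String]) {
  import HttpSourceConnectorConstants._

  // Fail fast at construction time, so that start() can wrap this in a Try
  private def required(key: String): String =
    connectorProperties.getOrElse(key, throw new IllegalArgumentException(s"Missing configuration: $key"))

  val getApiHttpUrl: String = required(HTTP_URL_CONFIG)
  val getApiKey: String     = required(API_KEY_CONFIG)
  val getTopic: String      = required(TOPIC_CONFIG)
  val getService: String    = required(SERVICE_CONFIG)
  val getPollInterval: Long = connectorProperties.getOrElse(POLL_INTERVAL_MS_CONFIG, POLL_INTERVAL_MS_DEFAULT).toLong

  // Assumed encoding for the extra API parameters: "key1=val1&key2=val2"
  val getApiParams: Map[String, String] =
    connectorProperties.get(API_PARAMS_CONFIG).toSeq
      .flatMap(_.split("&"))
      .map(_.split("=", 2))
      .collect { case Array(k, v) => k -> v }
      .toMap
}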

We are almost done here. The last function to be overridden is taskConfigs.
This function is in charge of producing (potentially different) configurations for different Source Tasks. In our case, there is no reason for the Source Task configurations to differ. In fact, our HTTP API will benefit little from parallelism, so, to keep things simple, we can assume the number of tasks always to be 1.

  override def taskConfigs(maxTasks: Int): JavaList[JavaMap[String, String]] = List(connectorConfig.connectorProperties.asJava).asJava

The name of the taskConfigs function was changed in Kafka version 2.1.0 - please bear that in mind when using this code with older Kafka versions.

Source Task class

In a similar manner to the Source Connector class, we implement the Source Task abstract class. It is only slightly more complex than the Connector class.

Just like for the Connector, there are start and stop functions to be overridden for the Task.

Remember the taskConfigs function from above? This is where task configuration ends up - it is passed to the Task's start function. Also, similarly to the Connector's start function, we parse the connection properties with HttpSourceTaskConfig, which is the same as HttpSourceConnectorConfig - configuration for Connector and Task in our case is the same.

We also set up the HTTP service that we are going to use in the poll function - we create an instance of the WeatherHttpService class. (Please note that start is executed only once, upon the creation of the task, and not every time a record is polled from the data source.)

  override def start(connectorProperties: JavaMap[String, String]): Unit = {
    Try(new HttpSourceTaskConfig(connectorProperties.asScala.toMap)) match {
      case Success(cfg) => taskConfig = cfg
      case Failure(err) => taskLogger.error(s"Could not start Task ${this.getClass.getName} due to error in configuration.", new ConnectException(err))
    }

    val apiHttpUrl: String = taskConfig.getApiHttpUrl
    val apiKey: String = taskConfig.getApiKey
    val apiParams: Map[String, String] = taskConfig.getApiParams

    val pollInterval: Long = taskConfig.getPollInterval

    taskLogger.info(s"Setting up an HTTP service for ${apiHttpUrl}...")
    Try( new WeatherHttpService(taskConfig.getTopic, taskConfig.getService, apiHttpUrl, apiKey, apiParams) ) match {
      case Success(service) =>  sourceService = service
      case Failure(error) =>    taskLogger.error(s"Could not establish an HTTP service to ${apiHttpUrl}")
                                throw error
    }

    taskLogger.info(s"Starting to fetch from ${apiHttpUrl} each ${pollInterval}ms...")
    running = new JavaBoolean(true)
  }

The Task also has the stop function. But, just like for the Connector, it does not do much, because there is no need to sign out from an HTTP API session.
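A minimal sketch of what it might look like, assuming all it has to do is log the shutdown and flip the running flag that poll checks (mirroring how start sets it):

  override def stop(): Unit = {
    taskLogger.info(s"Stopping Kafka Source Task ${this.getClass.getName}.")
    // Nothing to sign out from - just signal the poll loop that we are done.
    running = new JavaBoolean(false)
  }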

Now let us see how we get the data from our HTTP API - by overriding the poll function.

The fetchRecords function uses the sourceService HTTP service initialised in the start function. sourceService's sourceRecords function requests data from the HTTP API.

  override def poll(): JavaList[SourceRecord] = this.synchronized { if(running.get) fetchRecords else null }

  private def fetchRecords: JavaList[SourceRecord] = {
    taskLogger.debug("Polling new data...")

    val pollInterval = taskConfig.getPollInterval
    val startTime    = System.currentTimeMillis

    val fetchedRecords: Seq[SourceRecord] = Try(sourceService.sourceRecords) match {
      case Success(records)                    => if(records.isEmpty) taskLogger.info(s"No data from ${taskConfig.getService}")
                                                  else taskLogger.info(s"Got ${records.size} results for ${taskConfig.getService}")
                                                  records

      case Failure(error: Throwable)           => taskLogger.error(s"Failed to fetch data for ${taskConfig.getService}: ", error)
                                                  Seq.empty[SourceRecord]
    }

    val endTime     = System.currentTimeMillis
    val elapsedTime = endTime - startTime

    if(elapsedTime < pollInterval) Thread.sleep(pollInterval - elapsedTime)

    fetchedRecords.asJava
  }

Phew - that is the interface implementation done. Now for the fun part...

Requesting data from OpenWeatherMap's API

The fun part is rather straightforward. We use the scalaj.http library to issue a very simple HTTP request and get a response.

Our WeatherHttpService implementation will have two functions:

  • httpServiceResponse that will format the request and get data from the API
  • sourceRecords that will parse the Schema and wrap the result within the Kafka SourceRecord class.

Please note that error handling takes place in the fetchRecords function above.

    override def sourceRecords: Seq[SourceRecord] = {
        val weatherResult: HttpResponse[String] = httpServiceResponse
        logger.info(s"Http return code: ${weatherResult.code}")
        val record: Struct = schemaParser.output(weatherResult.body)

        List(
            new SourceRecord(
                Map(HttpSourceConnectorConstants.SERVICE_CONFIG -> serviceName).asJava, // partition
                Map("offset" -> "n/a").asJava, // offset
                topic,
                schemaParser.schema,
                record
            )
        )
    }

    private def httpServiceResponse: HttpResponse[String] = {

        @tailrec
        def addRequestParam(accu: HttpRequest, paramsToAdd: List[(String, String)]): HttpRequest = paramsToAdd match {
            case (paramKey,paramVal) :: rest => addRequestParam(accu.param(paramKey, paramVal), rest)
            case Nil => accu
        }

        val baseRequest = Http(apiBaseUrl).param("APPID",apiKey)
        val request = addRequestParam(baseRequest, apiParams.toList)

        request.asString
    }
Parsing the Schema

Now the last piece of the puzzle - our Schema parsing class.

The short version of it, which would do just fine, is just 2 lines of class (actually - object) body:

object StringSchemaParser extends KafkaSchemaParser[String, String] {
    override val schema: Schema = Schema.STRING_SCHEMA
    override def output(inputString: String) = inputString
}

Here we say we just want to use the pre-defined STRING_SCHEMA value as our schema definition, and we pass inputString straight to the output without any alteration.

Looks too easy, does it not? Schema parsing could be a big part of Source Connector implementation. Let us implement a proper schema parser. Make sure you read the Confluent Developer Guide first.

Our schema parser will be encapsulated in the WeatherSchemaParser object. KafkaSchemaParser is a trait with two type parameters - the inbound and outbound data types. Here they indicate that the parser receives data in String format and produces a Kafka Struct value as its result.

object WeatherSchemaParser extends KafkaSchemaParser[String, Struct]
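The KafkaSchemaParser trait itself is not listed here; a minimal sketch that is consistent with how both parsers use it (an abstract schema value plus an output function converting the inbound type to the outbound one) could be:

import org.apache.kafka.connect.data.Schema

// INPUT is the raw source data type (String here), OUTPUT is what goes into the SourceRecord
// (a plain String or a Struct), together with the Connect Schema describing it.
trait KafkaSchemaParser[INPUT, OUTPUT] {
  val schema: Schema
  def output(input: INPUT): OUTPUT
}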

The first step is to create a schema value with the SchemaBuilder. Our schema is rather large, therefore I will skip most fields. The field names given are a reflection of the hierarchy structure in the source JSON. What we are aiming for is a flat, table-like structure - a likely Schema creation scenario.

For JSON parsing we will be using the Scala Circe library, which in turn is based on the Cats library. (If you are a Python developer, you will see that Scala JSON parsing is a bit more involved - this might be an understatement - but, on the flip side, you can be sure about the result you are getting out of it.)

    override val schema: Schema = SchemaBuilder.struct().name("weatherSchema")
        .field("coord-lon", Schema.FLOAT64_SCHEMA)
        .field("coord-lat", Schema.FLOAT64_SCHEMA)

        .field("weather-id", Schema.FLOAT64_SCHEMA)
        .field("weather-main", Schema.STRING_SCHEMA)
        .field("weather-description", Schema.STRING_SCHEMA)
        .field("weather-icon", Schema.STRING_SCHEMA)
        
        // ...
        
        .field("rain", Schema.FLOAT64_SCHEMA)
        
        // ...

Next we define case classes, into which we will be parsing the JSON content.

   case class Coord(lon: Double, lat: Double)
   case class WeatherAtom(id: Double, main: String, description: String, icon: String)

That is easy enough. Please note that the case class attribute names match one-to-one with the attribute names in JSON. However, our weather JSON schema is rather relaxed when it comes to attribute naming. You can have names like type and 3h, neither of which is a valid identifier in Scala (type is a reserved word and 3h starts with a digit). What do we do? We give the attributes valid Scala names and then implement a decoder:

    case class Rain(threeHours: Double)
    object Rain {
        implicit val decoder: Decoder[Rain] = Decoder.instance { h =>
            for {
                threeHours <- h.get[Double]("3h")
            } yield Rain(
                threeHours
            )
        }
    }

The Rain case class is rather short, with only one attribute. The corresponding JSON name is 3h, which we map to the Scala attribute threeHours.

Not quite as simple as JSON parsing in Python, is it?

In the end, we assemble all sub-case classes into the WeatherSchema case class, representing the whole result JSON.

    case class WeatherSchema(
                                coord: Coord,
                                weather: List[WeatherAtom],
                                base: String,
                                mainVal: Main,
                                visibility: Double,
                                wind: Wind,
                                clouds: Clouds,
                                dt: Double,
                                sys: Sys,
                                id: Double,
                                name: String,
                                cod: Double
                            )

Now, the parsing itself. (Drums, please!)

structInput here is the input JSON in String format. WeatherSchema is the case class we created above. The Circe decode function returns a Scala Either, with the error on the Left() and the successful parsing result on the Right() - nice and tidy. And safe.

        val weatherParsed: WeatherSchema = decode[WeatherSchema](structInput) match {
            case Left(error) => {
                logger.error(s"JSON parser error: ${error}")
                emptyWeatherSchema
            }
            case Right(weather) => weather
        }

Now that we have the WeatherSchema object, we can construct our Struct object that will become part of the SourceRecord returned by the sourceRecords function in the WeatherHttpService class. That in turn is called from the HttpSourceTask's poll function that is used to populate the Kafka topic.

        val weatherStruct: Struct = new Struct(schema)
            .put("coord-lon", weatherParsed.coord.lon)
            .put("coord-lat", weatherParsed.coord.lat)

            .put("weather-id", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).id)
            .put("weather-main", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).main)
            .put("weather-description", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).description)
            .put("weather-icon", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).icon)

            // ...

Done!

Considering that Schema parsing in our simple example was optional, creating a Kafka Source Connector for us meant creating a Source Connector class, a Source Task class and a Source Service class.

Creating JAR(s)

JAR creation is described in Confluent's Connector Development Guide. The guide mentions two options: all the library dependencies can be added to the target JAR file (a.k.a. an 'uber-JAR'), or the dependencies can be copied to the target folder. In the latter case they must all reside in the same folder, with no subfolder structure. For no particular reason, I went with the latter option.

The Developer Guide says it is important not to include the Kafka Connect API libraries there. (Instead, they should be added to the CLASSPATH.) Please note that for the latest Kafka versions it is advised not to add these custom JARs to the CLASSPATH; instead, we will add them to the connector's plugin.path. But that we will leave for another blog post.
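With sbt, for example, one way to keep the Connect API out of the packaged artefacts while still compiling against it is to mark it as Provided - a rough sketch, with illustrative version numbers and library choices (scalaj-http and circe match the libraries used above):

// build.sbt - illustrative only
name := "kafka-weather-source-connector"
scalaVersion := "2.12.8"

libraryDependencies ++= Seq(
  // Compile against the Connect API but do not package it - Kafka Connect provides it at runtime
  "org.apache.kafka" %  "connect-api"   % "2.1.0" % Provided,
  "org.scalaj"       %% "scalaj-http"   % "2.4.1",
  "io.circe"         %% "circe-parser"  % "0.11.1",
  "io.circe"         %% "circe-generic" % "0.11.1"
)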

Scala - was it worth using it?

Only if you are a big fan. The code I wrote is very Java-like and it might have been better to write it in Java. However, if somebody writes a Scala wrapper for the Connector interfaces or, even better, if a Kafka Scala API is released, writing Connectors in Scala would be a very good choice.

Categories: BI & Warehousing

[Video 3 of 5] Oracle Cloud: Create VCN, Subnet, Firewall (Security List), IGW, DRG: Step By Step

Online Apps DBA - Thu, 2019-02-21 08:22

You are asked to deploy a three-tier, highly available application including a Load Balancer on Oracle's Gen 2 Cloud. Then: ✔ How would you define Network, Subnet & Firewalls in Oracle's Gen 2 Cloud? ✔ How do you allow the database port, but only from the Application Tier? ✔ What are Ingress and Egress Security Rules? […]

The post [Video 3 of 5] Oracle Cloud: Create VCN, Subnet, Firewall (Security List), IGW, DRG: Step By Step appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Simplest Automation: Use Environment Variables

Michael Dinh - Thu, 2019-02-21 07:25

Copy the last five Goldengate trail files from source to destination.

Here are the high-level steps:

Copy trail with prefix (aa*) to new destination:
1. export OLD_DIRDAT=/media/patch/dirdat
2. export NEW_DIRDAT=/media/swrepo/dirdat
3. export TRAIL_PREFIX=rt*
5. ls -l $NEW_DIRDAT
6. ls $OLD_DIRDAT/$TRAIL_PREFIX | head -5
7. ls $OLD_DIRDAT/$TRAIL_PREFIX | tail -5
8. cp -fv $(ls $OLD_DIRDAT/$TRAIL_PREFIX | tail -5) $NEW_DIRDAT
9. ls -l $NEW_DIRDAT/*

Copy trail with prefix (ab*) to new destination:
export TRAIL_PREFIX=ab*
Repeat steps 5-9

Oracle Survey Finds Enterprises Ready for Benefits of 5G

Oracle Press Releases - Thu, 2019-02-21 07:00
Press Release
Oracle Survey Finds Enterprises Ready for Benefits of 5G Nearly 80 percent of Companies Expect to Deploy Basic Connectivity Solutions by 2021 to Spawn New Revenue Opportunities and Tap Advantages of IoT and Smart Ecosystems

Redwood Shores, Calif.—Feb 21, 2019

While much of the discussion around 5G has centered on consumer devices, enterprises are looking towards the tremendous impact the technology can have on their ability to serve customers and the bottom line. Not only are most companies (97 percent) aware of the benefits of 5G, but 95 percent are already strategically planning how they will take advantage of this next generation of wireless connectivity to power core business initiatives from new services, to IoT and smart ecosystems.

The Oracle Communications study, “5G Smart Ecosystems Are Transforming the Enterprise – Are You Ready?,” surveyed 265 enterprise IT and business decision makers at medium and large enterprises globally in December, 2018 to find out how businesses are thinking about 5G today and its potential significance moving forward.

“Enterprises clearly want to capitalize on the promise of 5G, however, to be successful, IT and business leaders must avoid thinking of 5G as just another ‘G,’ and should instead consider it as an enabler to the smart ecosystem we have long talked about,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “This means asking the right questions at the outset, and considering how 5G can help enable upcoming solutions, what timeframe should be considered and how will they will procure and use 5G capabilities as part of their business evolution.”

Born in the cloud, 5G will have the ability to enable enterprises to provision or “slice” core pieces of their networks to power mission-critical new offerings and smart ecosystems. This can range from providing the highest-speed connections for life-saving 911 services, to enabling autonomous vehicles to communicate with each other quickly, to ensuring IoT devices in smart factories provide real-time information on the health of machines and assets.

Outside specific initiatives, respondents believe 5G will have a wide-spread impact across their business, including increasing employee productivity (86 percent), reducing costs (84 percent), enhancing customer experience (83 percent), and improving agility (83 percent). Business decision makers are most focused on quality of experience the technology will bring, while IT is concerned with network speed and resiliency.

Unleashing the Promise of 5G Ecosystems in the Enterprise

When it comes to 5G, enterprise are most focused on:

  • Unlocking the potential of IoT: Beyond initial benefits such as speed and quality of experience, 84 percent of respondents feel that 5G networks will be transformative and have a lasting impact on the way their companies do business. Another 73 percent agree the IoT will be revolutionized by 5G networks and 68 percent feel it will be transformative to their customers.
  • Monetizing new services: Eighty percent expect 5G to generate new revenue streams for their business. Forty-one percent of the respondents would deploy new monetization solutions specifically for 5G services alongside existing systems, while thirty-four percent say they would replace their existing systems with a single, converged solution for all services. Just about one in five (22 percent) said they will utilize and extend existing monetization solutions with 5G.
  • Experience and efficiencies: Eighty-four percent of respondents agree that 5G networks will be transformative and have a lasting impact on the way their companies do business. While business respondents are focused on the quality of experience improvements made possible by 5G, IT respondents care more about the network technologies and the internal efficiencies 5G may enable.
  • Security: While excited about the potential of 5G, both business and IT respondents cited security as a top priority. 51% of respondents ranked security as their highest concern.

Oracle’s survey also explored 5G’s potential role in solutions as varied as live streaming, industrial automation, smart homes and buildings, connected vehicles, immersive gaming, augmented and virtual reality. To learn more, click here to access the report or infographic.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Amy Dalkoff
Hill+Knowlton Strategies
+1.312.255.3078
amy.dalkoff@hkstrategies.com
About Oracle Communications

Oracle Communications provides integrated communications and cloud solutions for Service Providers and Enterprises to accelerate their digital transformation journey in a communications-driven world from network evolution to digital business to customer experience. www.oracle.com/communications

To learn more about Oracle Communications industry solutions, visit: Oracle Communications LinkedIn, or join the conversation at Twitter @OracleComms.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Amy Dalkoff

  • +1.312.255.3078

[BLOG] Oracle EBS (R12) On Cloud (OCI): High Level Steps

Online Apps DBA - Thu, 2019-02-21 04:20

Do you wish to deploy Oracle EBS (R12) on Oracle Cloud Infrastructure (OCI) using EBS Cloud Manager, but are unable to do so because of lack of knowledge of what steps to follow? If yes, then Don’t You Worry!! We’ve got you covered. Visit: https://k21academy.com/ebscloud28 & know about: ✔Background History ✔How To Deploy EBS Cloud […]

The post [BLOG] Oracle EBS (R12) On Cloud (OCI): High Level Steps appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Create a primary database using the backup of a standby database on 12cR2

Yann Neuhaus - Thu, 2019-02-21 02:05

The scope of this blog is to show how to create a primary role database from a backup of a standby database on 12cR2.

Step 1: We assume that an auxiliary instance has been created and started in nomount mode.

rman target /
restore primary controlfile from 'backup_location_directory/control_.bkp';
exit;

Specifying “restore primary” modifies the flag in the controlfile, so that a primary role instance will be mounted instead of a standby one.

Step 2: Once the instance is mounted, we restore the backup of the standby database.

run
{
catalog start with 'backup_location_directory';
restore database;
alter database flashback off;
recover database;
}

If you specified the recovery destination and size parameters in the pfile used to start the instance, it will try to enable flashback. Enabling flashback before or during the recovery is not allowed, so we deactivate it for the moment.

Step 3: The restore/recover completed successfully; we try to open the database, but get some errors:

alter database open :

ORA-03113: end-of-file on communication channel
Process ID: 2588
Session ID: 1705 Serial number: 5

Step 4: Fix the errors and try to open the database:

--normal redo log groups
alter database clear unarchived logfile group YYY;

--standby redo log groups
alter database clear unarchived logfile group ZZZ;
alter database drop logfile group ZZZ;

This is not enough. Looking at the database alert log file, we can see:

LGWR: Primary database is in MAXIMUM AVAILABILITY mode 
LGWR: Destination LOG_ARCHIVE_DEST_2 is not serviced by LGWR 
LGWR: Destination LOG_ARCHIVE_DEST_1 is not serviced by LGWR 

Errors in file /<TRACE_DESTINATION>_lgwr_1827.trc: 
ORA-16072: a minimum of one standby database destination is required 
LGWR: terminating instance due to error 16072 
Instance terminated by LGWR, pid = 1827

Step 5: Complete the opening procedure:

alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';
alter database set standby database to maximize performance;

SQL> select name,open_mode,protection_mode from v$database;

NAME      OPEN_MODE            PROTECTION_MODE
--------- -------------------- --------------------
NAME      MOUNTED              MAXIMUM PERFORMANCE

SQL> alter database flashback on;

Database altered.

SQL> alter database open;

Database altered.

SQL> select name,db_unique_name,database_role from v$database;

NAME      DB_UNIQUE_NAME                 DATABASE_ROLE
--------- ------------------------------ ----------------
NAME      NAME_UNIQUE                    PRIMARY

The article Create a primary database using the backup of a standby database on 12cR2 appeared first on Blog dbi services.

Documentum – MigrationUtil – 3 – Change Server Config Name

Yann Neuhaus - Thu, 2019-02-21 02:00

In the previous blog I used MigrationUtil to change the Docbase Name from RepoTemplate to repository1; in this blog, it is the Server Config Name's turn to be changed.

In general, the repository name and the server config name are the same, except in a High Availability setup.
You can find the Server Config Name in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ cat $DOCUMENTUM/dba/config/repository1/server.ini
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
...
1. Migration preparation

To change the server config name to repository1, you first need to update the MigrationUtil configuration file, as below:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/config.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">repository1</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>

...

<entry key="ChangeServerName">yes</entry>
<entry key="NewServerName.1">repository1</entry>

Set all other entries to no. The tool will use the above information and load more from the server.ini file.

2. Execute the migration

Use the below script to execute the migration:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh
#!/bin/sh
CLASSPATH=${CLASSPATH}:MigrationUtil.jar
export CLASSPATH
java -cp "${CLASSPATH}" MigrationUtil

Update it if you need to overload the CLASSPATH only during migration.

2.a Stop the Docbase and the DocBroker

$DOCUMENTUM/dba/dm_shutdown_repository1
$DOCUMENTUM/dba/dm_stop_DocBroker

2.b Update the database name in the server.ini file
As during the Docbase Name change, this is a workaround to avoid the error below:

...
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/DCTMDB
ERROR...Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

Check the tnsnames.ora and note the service name; in my case it is dctmdb.local.

[dmadmin@vmtestdctm01 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora 
DCTMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmtestdctm01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dctmdb.local)
    )
  )

Make the change in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = dctmdb.local
...

2.c Execute the migration script

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Change...
Skipping Host Name Change...
Skipping Install Owner Change...

Created log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange.log
Changing Server Name...
Database owner password is read from config.xml
Finished changing Server Name...

Skipping Docbase Name Change...
Skipping Docker Seamless Upgrade scenario...

Migration Utility completed.

All changes have been recorded in the log file:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange.log
Start: 2019-02-02 19:55:52.531
Changing Server Name
=====================

DocbaseName: repository1
Retrieving server.ini path for docbase: repository1
Found path: /app/dctm/product/16.4/dba/config/repository1/server.ini
ServerName: RepoTemplate
New ServerName: repository1

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Validating Server name with existing servers...
select object_name from dm_sysobject_s where r_object_type = 'dm_server_config'

Processing Database Changes...
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange_DatabaseRestore.sql'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_server_config' and object_name = 'RepoTemplate'
update dm_sysobject_s set object_name = 'repository1' where r_object_id = '3d0f449880000102'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_jms_config' and object_name like '%repository1.RepoTemplate%'
update dm_sysobject_s set object_name = 'JMS vmtestdctm01:9080 for repository1.repository1' where r_object_id = '080f4498800010a9'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_cont_transfer_config' and object_name like '%repository1.RepoTemplate%'
update dm_sysobject_s set object_name = 'ContTransferConfig_repository1.repository1' where r_object_id = '080f4498800004ba'
select r_object_id,target_server from dm_job_s where target_server like '%repository1.RepoTemplate%'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800010d3'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000035e'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000035f'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000360'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000361'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000362'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000363'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000364'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000365'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000366'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000367'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000372'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000373'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000374'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000375'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000376'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000377'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000378'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000379'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000037a'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000037b'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000386'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000387'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000388'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000389'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000e42'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000cb1'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d02'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d04'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d05'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003db'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003dc'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003dd'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003de'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003df'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e0'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e1'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e2'
Successfully updated database values...

Processing File changes...
Backed up '/app/dctm/product/16.4/dba/config/repository1/server.ini' to '/app/dctm/product/16.4/dba/config/repository1/server.ini_server_RepoTemplate.backup'
Updated server.ini file:/app/dctm/product/16.4/dba/config/repository1/server.ini
Backed up '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties' to '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_server_RepoTemplate.backup'
Updated acs.properties: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
Finished processing File changes...
Finished changing server name 'repository1'

Processing startup and shutdown scripts...
Backed up '/app/dctm/product/16.4/dba/dm_start_repository1' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_repository1_server_RepoTemplate.backup'
Updated dm_startup script.
Backed up '/app/dctm/product/16.4/dba/dm_shutdown_repository1' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_repository1_server_RepoTemplate.backup'
Updated dm_shutdown script.

Finished changing server name....
End: 2019-02-02 19:55:54.687

2.d Reset the value of database_conn in the server.ini file

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = repository1
database_conn = DCTMDB
...
3. Check after update

Start the Docbroker and the Docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

Check the log to be sure that the repository has been started correctly. Notice that the log name has been changed from RepoTemplate.log to repository1.log:

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/repository1.log
...
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:00:09.807613	29293[29293]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 29345, session 010f44988000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:00:10.809686	29293[29293]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 29362, session 010f44988000000c) is started sucessfully."
4. Is a manual rollback possible?

In fact, in the MigrationUtilLogs folder you can find logs, backups of the start/stop scripts, and also the SQL file for a manual rollback:

[dmadmin@vmtestdctm01 ~]$ ls -rtl $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs
total 980
-rw-rw-r-- 1 dmadmin dmadmin   4323 Feb  2 19:55 ServerNameChange_DatabaseRestore.sql
-rwxrw-r-- 1 dmadmin dmadmin   2687 Feb  2 19:55 dm_start_repository1_server_RepoTemplate.backup
-rwxrw-r-- 1 dmadmin dmadmin   3623 Feb  2 19:55 dm_shutdown_repository1_server_RepoTemplate.backup
-rw-rw-r-- 1 dmadmin dmadmin   6901 Feb  2 19:55 ServerNameChange.log

Let's see the content of the SQL file:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange_DatabaseRestore.sql
update dm_sysobject_s set object_name = 'RepoTemplate' where r_object_id = '3d0f449880000102';
update dm_sysobject_s set object_name = 'JMS vmtestdctm01:9080 for repository1.RepoTemplate' where r_object_id = '080f4498800010a9';
update dm_sysobject_s set object_name = 'ContTransferConfig_repository1.RepoTemplate' where r_object_id = '080f4498800004ba';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f4498800010d3';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f44988000035e';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f44988000035f';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000360';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000361';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000362';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000363';
...

I already noticed that a manual rollback is possible after the Docbase ID and Docbase Name changes, but I didn't test it… I would like to try this one.
So, to roll back:
Stop the Docbase and the DocBroker

$DOCUMENTUM/dba/dm_shutdown_RepoTemplate
$DOCUMENTUM/dba/dm_stop_DocBroker

Execute the SQL

[dmadmin@vmtestdctm01 ~]$ cd $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs
[dmadmin@vmtestdctm01 MigrationUtilLogs]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 17 19:53:12 2019
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> conn RepoTemplate@DCTMDB
Enter password: 
Connected.
SQL> @ServerNameChange_DatabaseRestore.sql
1 row updated.
1 row updated.
1 row updated.
...

The DB user is still RepoTemplate; it was not changed when I changed the Docbase Name.

Copy back the saved files; you can find the list of updated and backed-up files in the log:

cp /app/dctm/product/16.4/dba/config/repository1/server.ini_server_RepoTemplate.backup /app/dctm/product/16.4/dba/config/repository1/server.ini
cp /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_server_RepoTemplate.backup /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
cp /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_repository1_server_RepoTemplate.backup /app/dctm/product/16.4/dba/dm_start_repository1
cp /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_repository1_server_RepoTemplate.backup /app/dctm/product/16.4/dba/dm_shutdown_repository1

Remember to change back the database connection in /app/dctm/product/16.4/dba/config/repository1/server.ini (see step 2.d).

Then start the DocBroker and the Docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

Check the repository log:

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/RepoTemplate.log
...
2019-02-02T20:15:59.677595	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19232, session 010f44988000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:16:00.679566	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19243, session 010f44988000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:16:01.680888	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19255, session 010f44988000000c) is started sucessfully."

Yes, the rollback works correctly! :D Despite this, I hope you will not have to do it in a production environment. ;)

The article Documentum – MigrationUtil – 3 – Change Server Config Name appeared first on Blog dbi services.

Please help to understand how Nested Table cardinality estimation works in Oracle 12C

Tom Kyte - Wed, 2019-02-20 18:26
Hi Team, Request your help with one issue that we are facing. We pass a Nested table as a variable to a select statement. Example SQL is: <code>SELECT CAST ( MULTISET ( SELECT DEPTNBR FROM DEPT ...
Categories: DBA Blogs

Presentation of function result, which is own-type table via SELECT <func> FROM DUAL in sql developer.

Tom Kyte - Wed, 2019-02-20 18:26
Hi TOM, I've created a function that grants access to tables and views in a given schema to a given user. As a result, the function returns an own-type table that contains the prepared statement and the exception message, if one is thrown. 1. Creating types: <code...
Categories: DBA Blogs

With clause in distributed transactions

Tom Kyte - Wed, 2019-02-20 18:26
Hi Tom! As there is a restriction on GTTs - distributed transactions are not supported for temporary tables - does that mean that inline views in a query, i.e. using the WITH clause, but those with the MATERIALIZED hint, will not work properly...
Categories: DBA Blogs

opatchauto is not that dumb

Michael Dinh - Wed, 2019-02-20 17:41

I find it ironic that we want to automate yet fear automation.

Per the documentation, ACFS Support On OS Platforms (Certification Matrix) (Doc ID 1369107.1), the following patch is required to implement ACFS:

p22810422_12102160419forACFS_Linux-x86-64.zip
Patch for Bug# 22810422
UEKR4 SUPPORT FOR ACFS(Patch 22810422)

I had inquired why opatchauto is not used to patch the entire system versus manual patching for GI ONLY.

To patch GI home and all Oracle RAC database homes of the same version:
# opatchauto apply _UNZIPPED_PATCH_LOCATION_/22810422 -ocmrf _ocm response file_

OCM is not included in OPatch binaries since OPatch version 12.2.0.1.5; therefore, -ocmrf is not needed.
The reason for GI-only patching is simply that we only need it to enable ACFS support.

Rationale makes sense and typically ACFS is only applied to GI home.

Being curious, shouldn’t opatchauto know what homes to apply the patch where applicable?
Wouldn’t be easier to execute opatchauto versus performing manual steps?

What do you think and which approach would you use?

Here are the results from applying patch 22810422.


[oracle@racnode-dc1-2 22810422]$ pwd
/sf_OracleSoftware/22810422

[oracle@racnode-dc1-2 22810422]$ sudo su -
Last login: Wed Feb 20 23:13:52 CET 2019 on pts/0

[root@racnode-dc1-2 ~]# . /media/patch/gi.env
ORACLE_SID = [root] ? The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM2"

[root@racnode-dc1-2 ~]# export PATH=$PATH:$GRID_HOME/OPatch

[root@racnode-dc1-2 ~]# opatchauto apply /sf_OracleSoftware/22810422 -analyze

OPatchauto session is initiated at Wed Feb 20 23:21:46 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-21-53PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-22-12PM.log
The id for this session is YBG6

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:racnode-dc1-2
RAC Home:/u01/app/oracle/12.1.0.1/db1


==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-2
CRS Home:/u01/app/12.1.0.1/grid


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-22-24PM_1.log



OPatchauto session completed at Wed Feb 20 23:24:53 2019
Time taken to complete the session 3 minutes, 7 seconds


[root@racnode-dc1-2 ~]# opatchauto apply /sf_OracleSoftware/22810422

OPatchauto session is initiated at Wed Feb 20 23:25:12 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-25-19PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-25-38PM.log
The id for this session is 3BYS

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1


Preparing to bring down database service on home /u01/app/oracle/12.1.0.1/db1
Successfully prepared home /u01/app/oracle/12.1.0.1/db1 to bring down database service


Bringing down CRS service on home /u01/app/12.1.0.1/grid
Prepatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-2_2019-02-20_11-28-00PM.log
CRS service brought down successfully on home /u01/app/12.1.0.1/grid


Start applying binary patch on home /u01/app/12.1.0.1/grid
Binary patch applied successfully on home /u01/app/12.1.0.1/grid


Starting CRS service on home /u01/app/12.1.0.1/grid
Postpatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-2_2019-02-20_11-36-59PM.log
CRS service started successfully on home /u01/app/12.1.0.1/grid


Preparing home /u01/app/oracle/12.1.0.1/db1 after database service restarted
No step execution required.........
Prepared home /u01/app/oracle/12.1.0.1/db1 successfully after database service restarted

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc1-2
RAC Home:/u01/app/oracle/12.1.0.1/db1
Summary:

==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-2
CRS Home:/u01/app/12.1.0.1/grid
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-30-39PM_1.log



OPatchauto session completed at Wed Feb 20 23:40:10 2019
Time taken to complete the session 14 minutes, 58 seconds
[root@racnode-dc1-2 ~]#

[oracle@racnode-dc1-2 ~]$ . /media/patch/gi.env
ORACLE_SID = [hawk2] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM2"
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
22810422;ACFS Interim patch for 22810422

OPatch succeeded.

[oracle@racnode-dc1-2 ~]$ . /media/patch/hawk.env
ORACLE_SID = [+ASM2] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
There are no Interim patches installed in this Oracle Home "/u01/app/oracle/12.1.0.1/db1".

OPatch succeeded.
[oracle@racnode-dc1-2 ~]$

====================================================================================================

### Checking resources while patching racnode-dc1-1
[oracle@racnode-dc1-2 ~]$ crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE                               STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  INTERMEDIATE racnode-dc1-2            FAILED OVER,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.asm
               ONLINE  ONLINE       racnode-dc1-2            Started,STABLE
ora.net1.network
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.ons
               ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       racnode-dc1-2            169.254.178.60 172.1
                                                             6.9.11,STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.hawk.db
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       racnode-dc1-2            Open,STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       racnode-dc1-2            Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  INTERMEDIATE racnode-dc1-2            FAILED OVER,STABLE
ora.racnode-dc1-2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc1-2 ~]$


====================================================================================================

[oracle@racnode-dc1-1 ~]$ sudo su -
Last login: Wed Feb 20 23:02:19 CET 2019 on pts/0

[root@racnode-dc1-1 ~]# . /media/patch/gi.env
ORACLE_SID = [root] ? The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM1"

[root@racnode-dc1-1 ~]# export PATH=$PATH:$GRID_HOME/OPatch

[root@racnode-dc1-1 ~]# opatchauto apply /sf_OracleSoftware/22810422 -analyze

OPatchauto session is initiated at Wed Feb 20 23:43:46 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-43-54PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-44-12PM.log
The id for this session is M9KF

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:racnode-dc1-1
RAC Home:/u01/app/oracle/12.1.0.1/db1


==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-1
CRS Home:/u01/app/12.1.0.1/grid


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-44-26PM_1.log



OPatchauto session completed at Wed Feb 20 23:46:31 2019
Time taken to complete the session 2 minutes, 45 seconds

[root@racnode-dc1-1 ~]# opatchauto apply /sf_OracleSoftware/22810422

OPatchauto session is initiated at Wed Feb 20 23:47:13 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-47-20PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-47-38PM.log
The id for this session is RHMR

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1


Preparing to bring down database service on home /u01/app/oracle/12.1.0.1/db1
Successfully prepared home /u01/app/oracle/12.1.0.1/db1 to bring down database service


Bringing down CRS service on home /u01/app/12.1.0.1/grid
Prepatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-1_2019-02-20_11-50-01PM.log
CRS service brought down successfully on home /u01/app/12.1.0.1/grid


Start applying binary patch on home /u01/app/12.1.0.1/grid
Binary patch applied successfully on home /u01/app/12.1.0.1/grid


Starting CRS service on home /u01/app/12.1.0.1/grid
Postpatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-1_2019-02-20_11-58-57PM.log
CRS service started successfully on home /u01/app/12.1.0.1/grid


Preparing home /u01/app/oracle/12.1.0.1/db1 after database service restarted
No step execution required.........
Prepared home /u01/app/oracle/12.1.0.1/db1 successfully after database service restarted

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc1-1
RAC Home:/u01/app/oracle/12.1.0.1/db1
Summary:

==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-1
CRS Home:/u01/app/12.1.0.1/grid
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-52-37PM_1.log



OPatchauto session completed at Thu Feb 21 00:01:15 2019
Time taken to complete the session 14 minutes, 3 seconds

[root@racnode-dc1-1 ~]# logout

[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
ORACLE_SID = [hawk1] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM1"
[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
22810422;ACFS Interim patch for 22810422

OPatch succeeded.

[oracle@racnode-dc1-1 ~]$ . /media/patch/hawk.env
ORACLE_SID = [+ASM1] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
There are no Interim patches installed in this Oracle Home "/u01/app/oracle/12.1.0.1/db1".

OPatch succeeded.
[oracle@racnode-dc1-1 ~]$

Partner Webcast – From Mobile & Chatbots to the new Digital Assistant Cloud

In the last few years we have seen a massive growth in the mobile usage of instant messaging and chat applications. With Oracle Intelligent Bots, an integrated feature of Oracle Mobile Cloud...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Intercepting ADF Table Column Show/Hide Event with Custom Change Manager Class

Andrejus Baranovski - Wed, 2019-02-20 14:12
Ever wondered how to intercept the ADF table column show/hide event from the ADF Panel Collection component? Yes, you could use ADF MDS functionality to store the user preference for visible table columns. But what if you want to implement it yourself without using MDS? This is possible through a custom persistence manager class. I will show you how.

If you don't know what I'm talking about, check the screenshot below: this popup comes out of the box with ADF Panel Collection and helps manage the table's visible columns. It is quite useful, especially for large tables:


Obviously, we would like to store the user preference so that the next time the user comes back to the form, they see the previously stored setup for the table columns. One way to achieve this is the out-of-the-box ADF MDS functionality. But what if you don't want to use it? It is still possible: we can catch all changes made through the Manage Columns popup in a custom Change Manager class. Extend SessionChangeManager and override only a single method, addComponentChange. This is the place where we intercept changes and could log them to the DB, for example (later, on form load, we could read the table setup and apply it before the fragment is rendered):
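Here is a minimal sketch of such a class, assuming the Trinidad SessionChangeManager and AttributeComponentChange classes that ADF Faces builds on; the package name, class name and console logging are purely illustrative (the original post logs the changes and suggests persisting them to the database instead):

package com.example.view;

import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;

import org.apache.myfaces.trinidad.change.AttributeComponentChange;
import org.apache.myfaces.trinidad.change.ComponentChange;
import org.apache.myfaces.trinidad.change.SessionChangeManager;

public class TableColumnChangeManager extends SessionChangeManager {

    @Override
    public void addComponentChange(FacesContext context,
                                   UIComponent component,
                                   ComponentChange change) {
        // Changes made through the Manage Columns popup arrive here; column
        // show/hide typically comes in as an attribute change on the column.
        if (change instanceof AttributeComponentChange) {
            AttributeComponentChange attributeChange = (AttributeComponentChange) change;
            // Illustrative logging only - this is where the setup could be
            // persisted to a database table and re-applied on form load.
            System.out.println("Component " + component.getId()
                    + " changed: " + attributeChange.getAttributeName()
                    + " = " + attributeChange.getAttributeValue());
        }
        // Keep the default session-based change handling.
        super.addComponentChange(context, component, change);
    }
}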


Register custom Change Manager class in web.xml:
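A sketch of the registration, assuming the standard Trinidad change-persistence context parameter (the class name is the illustrative one from the sketch above):

<context-param>
    <param-name>org.apache.myfaces.trinidad.CHANGE_PERSISTENCE</param-name>
    <param-value>com.example.view.TableColumnChangeManager</param-value>
</context-param>

With this in place, component changes are routed through the custom class; because it extends SessionChangeManager and calls super, the default session-based behaviour is preserved.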


Manage Columns popup is out of the box functionality offered by ADF Panel Collection component:


The addComponentChange method will be invoked automatically, and you should see similar output when changing table column visibility:


Download sample application code from my GitHub repository.

Quarterly EBS Upgrade Recommendations: February 2019 Edition

Steven Chan - Wed, 2019-02-20 09:41

We've previously provided advice on the general priorities for applying EBS updates and creating a comprehensive maintenance strategy.   

Here are our latest upgrade recommendations for Oracle E-Business Suite updates and technology stack components. These quarterly recommendations are based upon the latest updates to Oracle's product strategies, the latest support timelines, and newly certified releases.

You can research these yourself using this MOS Note:

Upgrade Recommendations for February 2019

Check your EBS support status and patching baseline

  • EBS 12.2: Apply the minimum 12.2 patching baseline (EBS 12.2.3 + latest technology stack updates listed below). In Premier Support to December 30, 2030.
  • EBS 12.1: Apply the minimum 12.1 patching baseline (12.1.3 Family Packs for products in use + latest technology stack updates listed below). In Premier Support to December 31, 2021.
  • EBS 12.0: In Sustaining Support. No new patches available. Upgrade to 12.1.3 or 12.2. Before upgrading, 12.0 users should be on the minimum 12.0 patching baseline.
  • EBS 11.5.10: In Sustaining Support. No new patches available. Upgrade to 12.1.3 or 12.2. Before upgrading, 11i users should be on the minimum 11i patching baseline.

Apply the latest EBS suite-wide RPC or RUP

  • EBS 12.2: 12.2.8 (Oct 2018)
  • EBS 12.1: 12.1.3 RPC5 (Aug 2016)
  • EBS 12.0: 12.0.6
  • EBS 11.5.10: 11.5.10.2

Use the latest Rapid Install

  • EBS 12.2: StartCD 51 (Feb 2016)
  • EBS 12.1: StartCD 13 (Aug 2011)
  • EBS 12.0: 12.0.6
  • EBS 11.5.10: 11.5.10.2

Apply the latest EBS technology stack, tools, and libraries

  • EBS 12.2: AD/TXK Delta 10 (Sep 2017); FND (Apr 2017); Web ADI RPC (Jan 2018); Report Manager Bundle 5 (Nov 2018); EBS 12.2.7 OAF Update 1 (Jan 2019); EBS 12.2.6 OAF Update 16 (Nov 2018); EBS 12.2.5 OAF Update 20 (Jul 2018); EBS 12.2.4 OAF Update 20 (Nov 2018); ETCC (Feb 2019); Web Tier Utilities 11.1.1.9; Daylight Savings Time DSTv32 (Dec 2018); Upgrade to JDK 7
  • EBS 12.1: Web ADI Bundle 5 (Jan 2018); Report Manager Bundle 5 (Jan 2018); FND (Apr 2017); OAF Bundle 5 (Jun 2016); JTT Update 4 (Oct 2016); Daylight Savings Time DSTv32 (Dec 2018); Upgrade to JDK 7
  • EBS 12.0: N/A
  • EBS 11.5.10: N/A

Apply the latest security updates

  • EBS 12.2: Jan 2019 Critical Patch Update; SHA-2 PKI Certificates; SHA-2 Update for Web ADI & Report Manager to Feb 2020; Migrate from SSL or TLS 1.0 to TLS 1.2; Sign JAR files
  • EBS 12.1: Jan 2019 Critical Patch Update; SHA-2 PKI Certificates; SHA-2 Update for Web ADI & Report Manager to Feb 2020; Migrate from SSL or TLS 1.0 to TLS 1.2; Sign JAR files
  • EBS 12.0: Oct. 2015 Critical Patch Update
  • EBS 11.5.10: April 2016 Critical Patch Update

Use the latest certified desktop components

  • EBS 12.2: Switch to Java Web Start; Use the latest JRE 1.8, 1.7, or 1.6 release that meets your requirements; Upgrade to IE 11; Upgrade to Firefox ESR 60; Upgrade Office 2003 and Office 2007 to later Office versions (e.g. Office 2016); Upgrade Windows XP and Vista and Win 10v1507 to later versions (e.g. Windows 10v1607)
  • EBS 12.1: Switch to Java Web Start; Use the latest JRE 1.8, 1.7, or 1.6 release that meets your requirements; Upgrade to IE 11; Upgrade to Firefox ESR 60; Upgrade Office 2003 and Office 2007 to later Office versions (e.g. Office 2016); Upgrade Windows XP and Vista and Win 10v1507 to later versions (e.g. Windows 10v1607)
  • EBS 12.0: N/A
  • EBS 11.5.10: N/A

Upgrade to the latest database

  • EBS 12.2: Database 11.2.0.4 or 12.1.0.2
  • EBS 12.1: Database 11.2.0.4 or 12.1.0.2
  • EBS 12.0: Database 11.2.0.4
  • EBS 11.5.10: Database 11.2.0.4 or 12.1.0.2

If you're using Oracle Identity Management

  • EBS 12.2: Upgrade to Oracle Access Manager 11gR2 or Oracle Access Manager 12c; Upgrade to Oracle Internet Directory 11gR1, Oracle Internet Directory 12c, or Oracle Unified Directory 11gR2
  • EBS 12.1: Migrate from Oracle SSO to Oracle Access Manager 11gR2 or Oracle Access Manager 12c; Upgrade to Oracle Internet Directory 11gR1 or Oracle Internet Directory 12c
  • EBS 12.0: N/A
  • EBS 11.5.10: N/A

If you're using Oracle Discoverer

  • EBS 12.2: Migrate to Oracle Business Intelligence Enterprise Edition (OBIEE) or Oracle Business Intelligence Applications (OBIA); Discoverer 11.1.1.7 is in Sustaining Support as of June 2017
  • EBS 12.1: Migrate to Oracle Business Intelligence Enterprise Edition (OBIEE) or Oracle Business Intelligence Applications (OBIA); Discoverer 11.1.1.7 is in Sustaining Support as of June 2017
  • EBS 12.0: N/A
  • EBS 11.5.10: N/A

If you're using Oracle Portal

  • EBS 12.2: Migrate to Oracle WebCenter 11.1.1.9
  • EBS 12.1: Migrate to Oracle WebCenter 11.1.1.9
  • EBS 12.0: N/A
  • EBS 11.5.10: N/A
Categories: APPS Blogs

Cloning is one of the common tasks (apart from Patching & Troubleshooting).

Online Apps DBA - Wed, 2019-02-20 08:00

There are 3 main steps in cloning an Oracle EBS environment. Visit https://k21academy.com/appsdba25 and learn with our [BLOG] Oracle EBS (R12): Database Cloning from RMAN Backup, which covers the 3-phase cloning process: 1. Prepare the source database system 2. Copy the source database to the target system 3. Configure the target database system. There are 3 […]

The post Cloning is one of the common tasks (apart from Patching & Troubleshooting) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

What's new in 19c - Part II (Automatic Storage Management - ASM)

Syed Jaffar - Wed, 2019-02-20 07:32
There are not too many features to talk about in 19c ASM. Below are my hand-picked 19c ASM features for this blog post.


Automatic block corruption recovery 

With Oracle 19c, the CONTENT.CHECK disk group attribute is set to true by default on Exadata and in cloud environments. During a data copy operation, if the Oracle ASM relocation process detects block corruption, it performs automatic block corruption recovery by replacing the corrupted blocks with an uncorrupted mirror copy, if one is available.


Parity Protected Files Support

The level of data mirroring is controlled through the ASM disk group REDUNDANCY attribute. When two-way or three-way ASM mirroring is configured for a disk group that stores write-once files, such as archived logs and backup sets, a great deal of space is wasted. To reduce the storage overhead for such file types, ASM now introduces a PARITY value for the REDUNDANCY file type property. The PARITY value specifies single parity for redundancy. Set the REDUNDANCY setting to PARITY to enable this feature.

The redundancy of a file can be modified after its creation. When the property is changed from HIGH, NORMAL or UNPROTECTED to PARITY, only files created after the change are affected; existing files are not impacted.

A few enhancements have also been made to Oracle ACFS, Oracle ADVM and ACFS replication. Refer to the 19c ASM new features documentation for more details.


** Leaf nodes are desupported as part of the Oracle Flex Cluster architecture starting with 19c.


[Blog] Oracle WebLogic Administration: Supported Maximum Availability Architectures (MAA)

Online Apps DBA - Wed, 2019-02-20 07:22

  Supported Maximum Availability Architectures (MAA) are multi-datacenter solutions that provide continuous availability and protect an Oracle WebLogic Server system against downtime across multiple data centers… Want to know more about MAA? Visit: https://k21academy.com/weblogic24 and consider our new blog covering: ✔ Active-Active Application Tier with Active-Passive Database Tier and its key aspects ✔ Active-Passive Application […]

The post [Blog] Oracle WebLogic Administration: Supported Maximum Availability Architectures (MAA) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Business-Critical Cloud Adoption Growing yet Security Gaps Persist, Report Says

Oracle Press Releases - Wed, 2019-02-20 07:00
Press Release
Business-Critical Cloud Adoption Growing yet Security Gaps Persist, Report Says Oracle and KPMG study finds that confusion over cloud security responsibilities, lack of visibility and shadow IT complicate corporate security

REDWOOD SHORES, Calif. and NEW YORK—Feb 20, 2019

Companies continue to move business critical workloads and their most sensitive data to the cloud, yet security challenges remain, according to the second annual Oracle and KPMG Cloud Threat Report 2019 released today. The report found that 72 percent of respondents feel the public cloud is more secure than what they can deliver in their own data center and are moving data to the cloud, but visibility gaps remain that can make it hard for businesses to understand where and how their critical data is handled in the cloud.

The survey also found a projected 3.5 times increase in the number of organizations with more than half of their data in the cloud from 2018 to 2020, and 71 percent of organizations indicated that a majority of this cloud data is sensitive, up from 50 percent last year. However, the vast majority (92 percent) noted they are concerned about employees following cloud policies designed to protect this data.

The report found that the mission-critical nature of cloud services has made cloud security a strategic imperative. Cloud services are no longer nice-to-have tertiary elements of IT—they serve core functions essential to all aspects of business operations. The 2019 report identified several key areas where the use of cloud service can present security challenges for many organizations.

  • Confusion about the shared responsibility security model has resulted in cybersecurity incidents. Eighty-two percent of cloud users have experienced security events due to confusion over the shared responsibility model. While 91 percent have formal methodologies for cloud usage, 71 percent are confident these policies are being violated by employees, leading to instances of malware and data compromise.
  • CISOs are too often on the cloud security sidelines. Ninety percent of CISOs surveyed are confused about their role in securing a Software as a Service (SaaS) versus the cloud service provider environment.
  • Visibility remains the top security challenge. The top security challenge identified in the survey is detecting and reacting to security incidents in the cloud, with 38 percent of respondents naming it as their top challenge today. Thirty percent cited the inability of existing network security controls to provide visibility into cloud-resident server workloads as a security challenge.
  • Rogue cloud application use and lack of security controls put data at risk. Ninety-three percent of respondents indicated they are still dealing with “shadow IT”—in which employees use unsanctioned personal devices and storage or file share software for corporate data. Half of organizations cited lack of security controls and misconfigurations as common reasons for fraud and data exposures. Twenty-six percent of organizations cited unauthorized use of cloud services as their biggest cybersecurity challenge today.

“The world’s most important workloads are moving to the cloud, heightening the need for a coordinated, integrated and layered security strategy,” said Kyle York, vice president of product strategy, Oracle Cloud Infrastructure. “Starting with a cloud platform built for security and applying AI to safeguard data while also removing the burden of administrative tasks and patching removes complexity and helps organizations safeguard their most critical asset—their data.”

“As organizations continue to transition their cyber security thinking from strictly risk management to more of a focus on business innovation and growth, it is important that enterprise leaders align their business and cyber security strategies,” said Tony Buffomante, U.S. Leader of KPMG LLP’s Cyber Security Services. “With cloud services becoming an integral part of business operations, there is an intensified need to improve the security of the cloud and to integrate cloud security into the organization’s broader strategic risk mitigation plans.”

Additional Key Findings
  • Automation may improve chronic patching problems: Fifty-one percent surveyed report patching has delayed IT projects and 89 percent of organizations want to employ an automatic patching strategy.
  • Machine learning may help decrease threats: Fifty-three percent are using machine learning to decrease overall cyber security threats, while 48 percent are using a Multi-factor Authentication (MFA) solution to automatically trigger a second factor of authentication upon detecting anomalous user behavior.
  • Supply chain risk: Business-critical services must be contained as supply chain compromise has led to the introduction of malware in 49 percent of cases, followed by unauthorized access of data in 46 percent of cases.
  • Security events continue to increase while shared responsibility confusion expands: Only 1 in 10 organizations can analyze more than 75 percent of their security event data and 82 percent of cloud users have experienced security events due to confusion over cloud shared responsibility models.
  • Cloud adoption has expanded the core-to-edge threat model: An increasingly mobile workforce accessing both on premise and cloud-delivered applications and data dramatically complicates how cybersecurity professionals must think about their risk and exposure. In 2018, the number one area of investment was training, but this year, training slipped to number two and was replaced by edge-based security controls (e.g., WAF, CASB, Botnet/DDoS Mitigation controls).

To find out more about the Oracle and KPMG Cloud Threat Report 2019, visit Oracle at the RSA Conference, March 4-8 in San Francisco. (Booth #1559 – Moscone South).

About the Report

The Oracle and KPMG Cloud Threat Report 2019 examines emerging cyber security challenges and risks that businesses are facing as they embrace cloud services at an accelerating pace. The report provides leaders around the globe and across industries with important insights and recommendations for how they can help ensure that cyber security is a critical business enabler. The data in the report is based on a survey of 450 cyber security and IT professionals from private and public-sector organizations in North America (United States and Canada), Western Europe (United Kingdom), and Asia (Australia, Singapore).

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Michael Rudnick
KPMG LLP
+1.201.307.7398
mrudnick@kpmg.com
Christine Curtin
KPMG LLP
+1.201.307.8663
ccurtin@kpmg.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About KPMG LLP

KPMG LLP, the audit, tax and advisory firm (www.kpmg.com/us), is the independent U.S. member firm of KPMG International Cooperative ("KPMG International"). KPMG International’s independent member firms have 197,000 professionals working in 154 countries. KPMG International has been named a Leader in the Forrester Research Inc. report, The Forrester Wave™ Information Security Consulting Services Q3 2017. Learn more at www.kpmg.com/us. Some or all of the services described herein may not be permissible for KPMG audit clients and their affiliates.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Michael Rudnick

  • +1.201.307.7398

Christine Curtin

  • +1.201.307.8663

Oracle Exposes “DrainerBot” Mobile Ad Fraud Operation

Oracle Press Releases - Wed, 2019-02-20 06:00
Press Release
Oracle Exposes “DrainerBot” Mobile Ad Fraud Operation Millions of Consumer Devices May Be Infected; Apps Can Drain 10GB Data/Month; Joint Work of Teams from Oracle’s Moat and Dyn Acquisitions Led to Discovery

Redwood City, CA—Feb 20, 2019

Oracle today announced the discovery of and mitigation steps for “DrainerBot,” a major mobile ad fraud operation distributed through millions of downloads of infected consumer apps. Infected apps can consume more than 10GB of data per month downloading hidden and unseen video ads, potentially costing each device owner a hundred dollars per year or more in data overage charges.

DrainerBot was uncovered through the joint efforts of Oracle technology teams from its Moat and Dyn acquisitions. Now part of the Oracle Data Cloud, Moat offers viewability, invalid traffic (IVT), and brand safety solutions, while Dyn enables DNS and security capabilities as part of Oracle Cloud Infrastructure.

The DrainerBot code appears to have been distributed via an infected SDK integrated into hundreds of popular consumer Android apps and games[1] like "Perfect365," "VertexClub," “Draw Clash of Clans,” “Touch ‘n’ Beat – Cinema,” and “Solitaire: 4 Seasons (Full).” Apps with active DrainerBot infections appear to have been downloaded by consumers more than 10 million times, according to public download counts.

Information About DrainerBot
  • DrainerBot is an app-based fraud operation that uses infected code on Android devices to deliver fraudulent, invisible video ads to the device.
  • The infected app reports back to the ad network that each video advertisement has appeared on a legitimate publisher site, but the sites are spoofed, not real.
  • The fraudulent video ads do not appear onscreen in the apps (which generally lack web browsers or video players) and are never seen by users.
  • Infected apps consume significant bandwidth and battery, with tests and public reports indicating an app can consume more than 10 GB/month of data or quickly drain a charged battery, even if the infected app is not in use or in sleep mode.
  • The SDK being used in the affected apps appears to have been distributed by Tapcore, a company in the Netherlands.
  • Tapcore claims to help software developers monetize stolen or pirated installs of their apps by delivering ads through unauthorized installs, although fraudulent ad activity also takes place after valid app installs.
  • On its website, Tapcore claims to be serving more than 150 million ad requests daily and says its SDK has been incorporated into more than 3,000 apps.
Supporting Quotes

“Mobile app fraud is a fast-growing threat that touches every stakeholder in the supply chain, from advertisers and their agencies to app developers, ad networks, publishers, and, increasingly, consumers themselves,” said Mike Zaneis, CEO of the Trustworthy Accountability Group (TAG). “These types of fraud operations cross all four of TAG’s programmatic pillars, including fraud, piracy, malware, and transparency, and preventing such operations will require unprecedented cross-industry collaboration. As the ad industry’s leading information-sharing body, we are delighted to work with Oracle to educate and inform TAG’s membership about this emerging threat.”

“DrainerBot is one of the first major ad fraud operations to cause clear and direct financial harm to consumers,” said Eric Roza, SVP and GM of Oracle Data Cloud. “DrainerBot-infected apps can cost users hundreds of dollars in unnecessary data charges while wasting their batteries and slowing their devices. We look forward to working with companies across the digital advertising ecosystem to identify, expose, and prevent this and other emerging types of ad fraud.”

“Mobile devices are a prime target with a number of potential infection vectors, which are growing increasingly complicated, interconnected, and global in nature,” said Kyle York, VP of product strategy, Oracle Cloud Infrastructure. “The discovery of the DrainerBot operation highlights the benefit of taking a multi-pronged approach to identifying digital ad fraud by combining multiple cloud technologies. Bottom line is both individuals and organizations need to pay close attention to what applications are running on their devices and who wrote them."

Resources

Detailed information and mitigation resources for DrainerBot can be found at info.moat.com/drainerbot, including:

  • Information and advice for consumers on identifying potentially-infected apps on their devices, as well as general device security tips;
  • Access to a list of app IDs that have shown DrainerBot activity; (Note: Not all apps listed may currently be infected)
  • Access to the DrainerBot SDK, as well as related documentation;
  • Access to sample infected APKs for use by antivirus and security providers to identify and mitigate the DrainerBot threat.
 

Oracle Data Cloud’s Moat Analytics helps top advertisers and publishers measure and drive attention across trillions of ad impressions and content views, so they can avoid invalid traffic (IVT), improve viewability, and better protect their media spend. Among those solutions, Pre-Bid by Moat helps marketers identify and utilize ad inventory that meets their high standards for IVT, third-party viewability, and brand safety.

Oracle Cloud Infrastructure edge services (formerly Dyn) offer managed Web Application Security, DNS, and Internet Intelligence services that help companies build and operate a secure, intelligent cloud edge, protecting them from a complex and evolving cyberthreat landscape.

[1] All of the apps identified have recently generated fraudulent DrainerBot impressions identified by Moat Analytics.

Contact Info
Shasta Smith
Oracle
+1.503.560.0756
shasta.smith@oracle.com
About Oracle Data Cloud

Oracle Data Cloud helps marketers use data to capture consumer attention and drive results. Used by 199 of the 200 largest advertisers, our Audience, Context and Measurement solutions extend across the top media platforms and a global footprint of more than 100 countries. We give marketers the data and tools needed for every stage of the marketing journey, from audience planning to pre-bid brand safety, contextual relevance, viewability confirmation, fraud protection, and ROI measurement. Oracle Data Cloud combines the leading technologies and talent from Oracle’s acquisitions of AddThis, BlueKai, Crosswise, Datalogix, Grapeshot, and Moat.

About Oracle Cloud Infrastructure

Oracle Cloud Infrastructure is an enterprise Infrastructure as a Service (IaaS) platform. Companies of all sizes rely on Oracle Cloud to run enterprise and cloud native applications with mission-critical performance and core-to-edge security. By running both traditional and new workloads on a comprehensive cloud that includes compute, storage, networking, database, and containers, Oracle Cloud Infrastructure can dramatically increase operational efficiency and lower total cost of ownership. For more information, visit https://cloud.oracle.com/iaas.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Shasta Smith

  • +1.503.560.0756

DOAG day “database migration” in Mannheim on 19.02.2019

Yann Neuhaus - Wed, 2019-02-20 02:12

Yesterday I attended a DOAG conference day in Mannheim about migrating Oracle databases.

The first presentation was about the challenges of migrating to multitenant databases. With Oracle 20 it will probably no longer be possible to create a non-CDB database or to upgrade from a non-CDB database, so in the coming years all databases will have to be migrated to the multitenant architecture. Problems with licensing and different character sets were covered, and some migration methods to PDBs were shown.
The second lecture covered causes of database migration failures that were due not to the database technology itself but to surrounding technologies and human error.
The third talk presented a new migration method using RMAN incremental backups in combination with transportable tablespaces, which is very interesting for migrating big databases.
A migration method using the DUPLICATE command with the NOOPEN option was also presented.

Last but not least, an entertaining session about migration projects was held; the lessons learned were presented as haiku (a Japanese poem form).

The post DOAG day “database migration” in Mannheim on 19.02.2019 appeared first on Blog dbi services.
