Feed aggregator

Date constraint to validate all bookings are in the future

Tom Kyte - 9 hours 20 min ago
Hi, I need to create a constraint which will not allow appointments to be booked in the past. I wonder if someone could help me with this please. Thank you. Juliana
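(A note on approach: an Oracle CHECK constraint must be deterministic and cannot reference SYSDATE, so this rule is usually enforced in a trigger instead. A minimal sketch, with hypothetical table and column names:)

```sql
-- Sketch only: CHECK constraints cannot call SYSDATE, so validate in a
-- row-level trigger instead (table/column names are hypothetical).
create or replace trigger booking_in_future_trg
before insert or update of appointment_date on bookings
for each row
begin
  if :new.appointment_date < sysdate then
    raise_application_error(-20001,
      'Appointments cannot be booked in the past');
  end if;
end;
/
```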
Categories: DBA Blogs

Monitoring parallel execution of FULL table scan

Tom Kyte - 9 hours 20 min ago
Hi I'm on 12.2 EE on Win 2016 I have the following SQL which selects from a 550 GB table (yes, it is GB due to massive GDPR logging) <code> create table GFAUDIT.fga_log$_kopi_201809 as select /*+ PARALLEL (8)*/ (select instance_name from v...
Categories: DBA Blogs

REGEXP_LIKE Statement

Tom Kyte - 9 hours 20 min ago
Why is this statement returning a value? <code>select * from ( SELECT 'AAaaanders4n' name FROM dual ) WHERE REGEXP_LIKE (name, '^[A]{1}');</code> I have given {1} in REGEXP_LIKE, yet this statement still returns 'AAaaanders4n'
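(The likely explanation: {1} means exactly one occurrence of the preceding element, so ^[A]{1} only constrains the first character; the rest of the string is unconstrained, and 'AAaaanders4n' does start with 'A'. A sketch of a pattern that rejects a second leading 'A':)

```sql
-- '^A' requires only that the string START with one 'A'; trailing
-- characters are unconstrained, so 'AAaaanders4n' matches.
-- To also insist the second character is not another 'A':
select * from (select 'AAaaanders4n' name from dual)
where regexp_like(name, '^A[^A]');   -- returns no rows here
```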
Categories: DBA Blogs

Speed of Light

Bobby Durrett's DBA Blog - Wed, 2019-03-20 16:30

Looking at cloud databases has me thinking about the speed of light. Wikipedia says that the speed of light is about 186,000 miles per second. If my calculations are correct that is 5.37 microseconds per mile. The United States is about 2680 miles wide so it would take light about 14.4 milliseconds to cross the US. If I ping one of my favorite web sites it takes tens of milliseconds to ping so that kind of makes sense because those sites are in other cities and I am going through various routers. I did some tests with my company’s storage and found that reading from our storage when the data is cached in the storage server takes around 200 microseconds. That is 200 microseconds for a round trip. I’m sure that our database servers and storage are a lot less than a mile apart so most of that time has nothing to do with the speed of light. I heard about a cloud vendor whose fast network connection took 100 microseconds plus the speed of light. I guess 100 microseconds is the cost of getting your data to fiber and light does the rest. If your cloud database was on the other side of the country, I guess it could take 14 milliseconds each way at least for each SQL request. If the cloud database was in your own city and say 10 miles away that would only tack on about 53.7 microseconds each way to the 100 microseconds overhead. I guess it makes sense. Maybe 100 microseconds plus the speed of light is the cost of moving data in the best case?
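The arithmetic is easy to sanity-check; a quick sketch (using the rough distance figures quoted above):

```python
# Back-of-the-envelope check of the numbers above. Speed of light is
# roughly 186,000 miles/second; distances are rough assumptions.
MILES_PER_SEC = 186_000

def one_way_latency_us(miles):
    """Light-travel time in microseconds for a given distance in miles."""
    return miles / MILES_PER_SEC * 1_000_000

us_per_mile = one_way_latency_us(1)              # ~5.38 microseconds/mile
across_us_ms = one_way_latency_us(2680) / 1000   # ~14.4 ms coast to coast
ten_miles_us = one_way_latency_us(10)            # ~53.8 microseconds

print(round(us_per_mile, 2), round(across_us_ms, 1), round(ten_miles_us, 1))
```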

Bobby

Categories: DBA Blogs

Reintroducing Terri Noyes

Steven Chan - Wed, 2019-03-20 11:33

I am pleased to reintroduce you to Terri Noyes, a highly experienced and versatile member of the Oracle E-Business Suite Technology product management team.

Terri joined Oracle in 1991, and has held a variety of engineering management and cross-functional leadership positions. Terri started her software engineering career at the General Electric R&D Center in Schenectady, New York, and then joined MIT Lincoln Laboratories in Lexington, MA before moving to California for Oracle. She holds an M.S. in Computer Science and Computer Engineering from Rensselaer Polytechnic Institute, and a B.S. in Computer Science and Mathematics from the University of Massachusetts.

Terri is currently focused closely on assisting our customers at all stages of their journeys to using EBS on Oracle Cloud. As such, she is one of the key contributors to our sister Oracle E-Business Suite and Oracle Cloud blog, writing and editing articles on a regular basis. So if you are a reader of that blog, you are already familiar with her work.

Having written for the EBS Technology Blog on numerous occasions in the past, Terri will now be doing so again on a regular basis. Don't miss what she has to say!

Terri lives in the Boston area with her family and two dogs, Ringo and Charly (loosely named after famous British drummers!).

Related Articles
Categories: APPS Blogs

Supported Platform Guide/UTA Pack Guide

Anthony Shorten - Wed, 2019-03-20 09:44

As with all Oracle products, we list supported platforms in the installation documentation. But because platform changes happen between releases, Oracle products also publish certification and supported-platform documentation on My Oracle Support.

Oracle Utilities is no different: we publish the up-to-date information in an article within My Oracle Support. It is recommended to use that article as the reference, as the Installation Guides may become stale with respect to platforms.

The article for Oracle Utilities provides a spreadsheet centralizing all the information, including certified platforms, certified database releases, and even the versions of Oracle Utilities products supported with content in the Oracle Utilities Accelerator.

The article, Certification Matrix for Oracle Utilities Products (Doc Id: 1454143.1), is available from My Oracle Support.

 

Partitioning -- 13d : TRUNCATE and DROP Partitions and Global Indexes

Hemant K Chitale - Wed, 2019-03-20 07:11
A TRUNCATE or DROP Partition makes Global Indexes on a Partitioned Table UNUSABLE.

You may be lucky if the target partition was empty, resulting in Oracle maintaining Global Indexes as valid.  However, the accepted rule is that you either (a) use the UPDATE INDEXES clause [resulting in the TRUNCATE or DROP taking longer to run, effectively locking the table partitions] OR  (b) do a REBUILD of the Indexes that become UNUSABLE after the TRUNCATE or DROP.

12c introduced what it calls Asynchronous Global Index Maintenance.  With this feature, the TRUNCATE or DROP runs much faster as a DDL, without actually removing the target rows from the Global Indexes [but it still requires the UPDATE INDEXES clause to be specified].

So, now in my 12.2 database I have these two Indexes on SALES_DATA :

SQL> select index_name, partitioned, status
2 from user_indexes
3 where table_name = 'SALES_DATA'
4 order by 2,1
5 /

INDEX_NAME PAR STATUS
------------------------------ --- --------
SALES_DATA_PK NO VALID
SALES_DATA_LCL_NDX_1 YES N/A

SQL>


I then TRUNCATE a non-empty Partition and check the Indexes

SQL> alter table sales_data truncate partition P_2015 update indexes;

Table truncated.

SQL>
SQL> select index_name, partitioned, status, orphaned_entries
2 from user_indexes
3 where table_name = 'SALES_DATA'
4 order by 2,1
5 /

INDEX_NAME PAR STATUS ORP
------------------------------ --- -------- ---
SALES_DATA_PK NO VALID YES
SALES_DATA_LCL_NDX_1 YES N/A NO

SQL>


The ORPHANED_ENTRIES column indicates that SALES_DATA_PK is subject to Asynchronous Index Maintenance.

This is the job that will do the Index Maintenance at 2 AM:

SQL> l
1 select owner, job_name, last_start_date, next_run_Date
2 from dba_scheduler_jobs
3* where job_name = 'PMO_DEFERRED_GIDX_MAINT_JOB'
SQL> /

OWNER
---------------------------------------------------------------------------
JOB_NAME
---------------------------------------------------------------------------
LAST_START_DATE
---------------------------------------------------------------------------
NEXT_RUN_DATE
---------------------------------------------------------------------------
SYS
PMO_DEFERRED_GIDX_MAINT_JOB
20-MAR-19 10.18.51.215433 AM UTC
21-MAR-19 02.00.00.223589 AM UTC


SQL> !date
Wed Mar 20 20:05:24 SGT 2019

SQL>


So, I could
(1) wait for the next run of the job OR
(2) manually trigger the job (which will scan the entire database for all indexes that require such maintenance) OR
(3) Execute  DBMS_PART.CLEANUP_GIDX  to initiate the maintenance for the specific index OR
(4) Execute an ALTER INDEX REBUILD to make the Index USABLE again.

SQL> execute dbms_part.cleanup_gidx('HEMANT','SALES_DATA');

PL/SQL procedure successfully completed.

SQL> select index_name, partitioned, status, orphaned_entries
2 from user_indexes
3 where table_name = 'SALES_DATA'
4 order by 2,1
5 /

INDEX_NAME PAR STATUS ORP
------------------------------ --- -------- ---
SALES_DATA_PK NO VALID NO
SALES_DATA_LCL_NDX_1 YES N/A NO

SQL>


Note that the argument to CLEANUP_GIDX is the *Table Name*, not an Index Name.


Here I have demonstrated a TRUNCATE Partition, but the same method would be usable for a DROP Partition.
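(The DROP form, sketched here with a hypothetical partition name, would be:)

```sql
-- The same asynchronous maintenance applies: only orphaned entries remain
-- in the Global Index until the cleanup job or DBMS_PART.CLEANUP_GIDX runs.
alter table sales_data drop partition P_2016 update indexes;
```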




Categories: DBA Blogs

Generate number based on start and end columns.

Tom Kyte - Wed, 2019-03-20 06:46
Generate values based on start and end columns without using a procedure. How should I modify the SELECT query? <i>select key_column, start_point, end_point FROM tab1 WHERE key_column='10254';</i> key_column start_point end_point 10254 -2 ...
Categories: DBA Blogs

How to recover the whole database with RMAN Backup

Tom Kyte - Wed, 2019-03-20 06:46
Hi Team, First of all, a big thanks for your support. Now I want to know the steps to recover a fully operational database with an RMAN backup. I haven't done this scenario before, so I am going with a test case here. My requirement is: 1) I have dat...
Categories: DBA Blogs

cannot access objects in different schema

Tom Kyte - Wed, 2019-03-20 06:46
I am the admin user and can create tables and procedures in any schema. I have a few tables in Schema B which I am referencing in a package I am creating in Schema A; however, upon compiling, it does not see the tables in Schema B. Schema B does not ha...
Categories: DBA Blogs

Virtual columns in Oracle 11g

Tom Kyte - Wed, 2019-03-20 06:46
Hi Tom, what is a virtual column in 11g? Why has Oracle introduced it? Can you give us its possible usages? Regards, Amir Riaz
Categories: DBA Blogs

Podcast: Polyglot Programming and GraalVM

OTN TechBlog - Tue, 2019-03-19 23:15

How many programming languages are there? I won’t venture a guess. There must be dozens, if not hundreds. The 2018 State of the Octoverse Report from Github identified the following as the top ten most popular languages among GitHub contributors:

  1. JavaScript
  2. Java
  3. Python
  4. PHP
  5. C++
  6. C#
  7. TypeScript
  8. Shell
  9. C
  10. Ruby

So the word “polyglot” definitely describes the world of the software coder.

Polyglot programming is certainly nothing new, but as the number of languages grows, and as language preferences among coders continue to evolve, what happens to decisions about which language to use in a particular project? In this program we'll explore the meaning and evolution of polyglot programming, examine the benefits and challenges of mixing and matching different languages, and then discuss the GraalVM project and its impact on polyglot programming.

This is Oracle Groundbreakers Podcast #364. It was recorded on Monday February 11, 2019. Time to listen...

The Panelists (listed alphabetically)

Roberto Cortez
Java Champion
Founder and Organizer, JNation

Dr. Chris Seaton, PhD
Research Manager, Virtual Machine Group, Oracle Labs

Oleg Selajev
Lead Developer Advocate, GraalVM, Oracle Labs

Additional Resources (coming soon)
  • Dmitry Kornilov, Tomas Langer, Jose Rodriguez, and Phil Wilkins discuss the ins, outs, and practical applications of Helidon, the lightweight Java microservices framework.
  • What's Up with Serverless? A panel discussion of where Serverless fits in the IT landscape.
  • Baruch Sadogursky, Leonid Igolnik, and Viktor Gamov discuss DevOps, streaming, liquid software, and observability in this podcast captured during Oracle Code One 2018
Subscribe

Never miss an episode! The Oracle Groundbreakers Podcast is available via:

Participate

If you have a topic suggestion for the Oracle Groundbreakers Podcast, or if you are interested in participating as a panelist, please post a comment. We'll get back to you right away.

PostgresConf 2019 Training Days

Jeremy Schneider - Tue, 2019-03-19 18:47

It feels like PostgresConf in New York is in full swing, even though the main tracks haven’t even started yet!

(Oh, and by the way, as of this morning I heard there are still day-passes available for those who haven’t yet registered for the conference… and then you can come hear a great session about Wait Events in PostgreSQL this Thursday at 4:20pm!)

The first two days of PostgresConf are summits, tutorials and training sessions. A good chunk of my day today was helping out with Scott Mead’s intensive 3 hour hands-on lab Setting up PostgreSQL for Production in AWS – but outside of that I’ve managed to drop in to a number of other sessions that sounded interesting. I did my best to take down some notes so I could share a few highlights.

Monday March 18

Personally, my favorite session on Monday was Brent Bigonger’s session.  He’s a database engineer at Amazon who was involved in migrating their Inventory Management System to Aurora PostgreSQL. I always love hearing good stories (part of why I’ve always been a fan of user groups) – this presentation gave a nice high level overview of the business, a review of the planning and execution process for the migration, and lots of practical lessons learned.

  • Some of the tips were things people are generally familiar with – like NULLs behaving differently and the importance of performance management with a tool like Performance Insights.
  • My favorite tip is getting better telemetry by instrumenting SQL with comments (SELECT /* my-service-call-1234 */ …) which reminded me of something I also read in Baron Sc​hwartz’s recently updated e-book on observable systems: “including implicit data in SQL.”
  • A great new tip (to me) was the idea of creating a heartbeat table as one more safety check in a replication process.  You can get a sense for lag by querying the table and you can also use it during a cutover to get an extra degree of assurance that no data was missed.
  • Another general point I really resonated with: Brent gave a nice reminder that a simple solution which meets the business requirements is better than a sophisticated or complex solution that goes beyond what the business really needs.  I feel tempted on occasion to leverage architectures because they are interesting – and I always appreciate hearing this reiterated!
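(A minimal sketch of the heartbeat-table idea in PostgreSQL; all names here are made up for illustration:)

```sql
-- On the source: a scheduled job "beats" once a minute.
create table replication_heartbeat (
    id        int primary key,
    last_beat timestamptz not null
);
insert into replication_heartbeat values (1, now())
on conflict (id) do update set last_beat = now();

-- On the replica: approximate lag, and a cutover sanity check that the
-- most recent beat made it across.
select now() - last_beat as approx_lag from replication_heartbeat;
```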

On the AWS track, aside from Brent’s session, I caught a few others: Jim Mlodgenski giving a deep dive on Aurora PostgreSQL architecture and Jim Finnerty giving a great talk on Aurora PostgreSQL performance tuning and query plan management.  It’s funny, but I think my favorite slide from Finnerty’s talk was actually one of the simplest and most basic; he had a slide that was just a high-level list of steps for performance tuning.  I don’t remember the exact list on that slide at the moment, but the essential process: (1) identify the top SQL, (2) EXPLAIN to get the plan, (3) make improvements to the SQL, and (4) test and verify whether the improvements actually had the intended effect.
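(Sketched in PostgreSQL terms, assuming the pg_stat_statements extension is installed; the column names vary slightly across PostgreSQL versions, and the query in step 2 is hypothetical:)

```sql
-- (1) Identify the top SQL by total execution time:
select query, calls, round(total_exec_time) as total_ms
from pg_stat_statements
order by total_exec_time desc
limit 5;

-- (2) Get the plan for a candidate statement:
explain (analyze, buffers)
select o.* from orders o where o.customer_id = 42;

-- (3) Improve it (rewrite, add an index, etc.), then
-- (4) re-run EXPLAIN ANALYZE to verify the intended effect.
```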

Other sessions I dropped into:

  • Alvaro Hernandez giving an Oracle to PostgreSQL Migration Tutorial.  I love live demos (goes along with loving hands on labs) and so this session was a hit with me – I wasn’t able to catch the whole thing but I did catch a walk-through of ora2pg.
  • Avinash Vallarapu giving an Introduction to PostgreSQL for Oracle and MySQL DBAs. When I slipped in, he was just wrapping up a section on hot physical backups in PostgreSQL with the pg_basebackup utility.  After that, Avi launched into a section on MVCC in PostgreSQL – digging into transaction IDs and vacuum, illustrated with block dumps and the pageinspect extension.  The part of this session I found most interesting was actually a few of the participant discussions – I heard lively discussions about what extensions are and about comparisons with RMAN and older versions of Oracle.
Tuesday March 19

As I said before, a good chunk of my morning was in Scott’s hands-on lab. If you ever do a hands-on lab with Scott then you’d better look out… he did something clever there: somewhere toward the beginning, if you followed the instructions correctly, then you would be unable to connect to your database!  Turns out this was on purpose (and the instructions actually tell you this) – since people often have this particular problem connecting when they first start out on RDS, Scott figured he’d just teach everyone how to fix it.  I won’t tell you what the problem actually is though – you’ll have to sign up for a lab sometime and learn for yourself.  :)

As always, we had a lot of really interesting discussions with participants in the hands-on lab.  We talked about the DBA role and the shared responsibility model, about new tools used to administer RDS databases in lieu of shell access (like Performance Insights and Enhanced Monitoring), and about how RDS helps implement industry best practices like standardization and automation. On a more technical level, people were interested to learn about the “pgbench” tool provided with PostgreSQL.

In addition to the lab, I also managed to catch part of Simon Riggs’ session Essential PostgreSQL 11 Database Administration – in particular, the part about PostgreSQL 11 new features.  One interesting new thing I learned was about some work done specifically around the performance of indexes on monotonically increasing keys.

Interesting Conversations

Of course I learned just as much outside of the sessions as I learned in the sessions.  I ended up eating lunch with Alexander Kukushkin who helped facilitate a 3 hour hands-on session today about Understanding and implementing PostgreSQL High Availability with Patroni and enjoyed hearing a bit more about PostgreSQL at Zalando. Talked with a few people from a government organization who were a long-time PostgreSQL shop and interested to hear more about Aurora PostgreSQL. Talked with a guy from a large financial and media company about flashback query, bloat and vacuum, pg_repack, parallel query and partitioning in PostgreSQL.

And of course lots of discussions about the professional community. Met PostgresConf conference volunteers from California to South Africa and talked about how they got involved in the community.  Saw Lloyd and chatted about the Seattle PostgreSQL User Group.

The training and summit days are wrapping up and now it’s time to get ready for the next three days: keynotes, breakout sessions, exposition, a career fair and more!  I can’t wait.  :)

New utility Python scripts for DBAs

Bobby Durrett's DBA Blog - Tue, 2019-03-19 14:45

I pushed out three new Python scripts that might be helpful to Oracle DBAs. They are in my miscpython repository.

Might be helpful to some people.

Bobby

Categories: DBA Blogs

Predict tablespace growth for next 30 days

Tom Kyte - Tue, 2019-03-19 12:26
How can I predict tablespace growth for the next 30 days? I need to configure this using OEM. Any possible solutions?
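(One manual approach, sketched here as an assumption rather than the OEM configuration the question asks about: fit a linear trend to AWR history, which requires the Diagnostics Pack license. Sizes in this view are in database blocks.)

```sql
-- Growth per AWR snapshot, in database blocks, per tablespace.
select t.name,
       regr_slope(u.tablespace_usedsize, u.snap_id) as blocks_per_snap
from   dba_hist_tbspc_space_usage u
join   v$tablespace t on t.ts# = u.tablespace_id
group  by t.name
order  by blocks_per_snap desc;
```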
Categories: DBA Blogs

Oracle Analytics Cloud (OAC) training with Rittman Mead

Rittman Mead Consulting - Tue, 2019-03-19 11:36

Rittman Mead have today launched its new Oracle Analytics Cloud (OAC) Bootcamp. Run on OAC, the course lasts four days, covers everything you need to know in order to manage your Cloud BI platform, and assumes no prior knowledge up-front.

As the course is modular, you are able to choose which days you'd like to attend. Day 1 covers an OAC overview, provisioning, systems management, integration and security. Day 2 covers RPD Modelling and Data Modeller. Day 3 is devoted to creating reports, dashboards, alerts and navigation. Day 4 covers content creation using Oracle Data Visualization.

Book here: https://www.rittmanmead.com/training-schedule/

Got a team to train? You can also have our OAC Bootcamp delivered on-site at your location. For more information and prices contact training@rittmanmead.com

Categories: BI & Warehousing

Updated EBS R12 Tools Requirements for IBM AIX on Power Systems

Steven Chan - Tue, 2019-03-19 08:48

Beginning on May 3rd 2019, Oracle E-Business Suite application tier patches for Releases 12.2, 12.1, and 12.0 on IBM AIX on Power Systems will be built with version 12.1 of the IBM XL C/C++ compiler, which includes new runtime and utilities packages. Customers who plan to apply patches built after May 3rd 2019 to their Oracle E-Business Suite environment must first ensure they meet all the new requirements.

This change is needed because IBM is discontinuing support of the previous XL compiler (version 11) that was formerly used for building Oracle E-Business Suite 12 patches.

We are announcing this change now to give our customers time to prepare for, and be compliant with, all requirements for patches generated after the May 3rd 2019 date.

We recommend you review the following documentation for all new requirements relating to the IBM AIX on Power Systems platform:

Related Articles
Categories: APPS Blogs

Oracle Cloud Helps Drive Efficiency and Innovation for MGM Resorts International

Oracle Press Releases - Tue, 2019-03-19 08:00
Press Release
Oracle Cloud Helps Drive Efficiency and Innovation for MGM Resorts International Entertainment trailblazer moves to cloud applications to support growing and diversifying business demands

MODERN BUSINESS EXPERIENCE, Las Vegas, NV—Mar 19, 2019

 

MGM Grand

Oracle Cloud Applications will support MGM Resorts International’s fast-changing business needs as the entertainment company integrates systems and streamlines operations to drive efficiency and innovation across the enterprise.

MGM Resorts needed a flexible, scalable and secure business platform that would allow it to rapidly adapt to changing demands and maintain its competitive advantage. After carefully evaluating Oracle’s capabilities, MGM Resorts chose Oracle Cloud Applications.

“We selected Oracle Cloud Applications because of the company’s proven track-record moving large complex organizations to more agile cloud technology. With our core business processes in the cloud, we can rapidly evolve our offerings and experiences,” said Kelly Litster, senior vice president of Strategic Initiatives for MGM Resorts International.

With Oracle Enterprise Resource Planning (ERP) Cloud and Oracle Enterprise Performance Management (EPM) Cloud, MGM Resorts will be able to modernize processes, improve finance agility and make strategic, data-based decisions. Oracle Supply Chain Management (SCM) Cloud will enable MGM Resorts to manage its supply chain at scale, with continuous innovation.

“The entertainment industry is at the forefront of the experience economy, pioneering new methods to meet and exceed changing customer expectations,” said Rondy Ng, senior vice president, Oracle Applications Development. “By moving to the industry leading Oracle ERP Cloud, MGM Resorts will be able to drive operational efficiency across the organization and continually take advantage of the latest innovations to position itself for future growth.”

Oracle is uniquely positioned with the industry’s broadest portfolio of cloud applications, which has garnered significant industry recognition. Oracle ERP Cloud was recently named the only Leader in Gartner’s Magic Quadrant for Cloud ERP for Product-Centric Midsize Enterprises. It was also named a Leader in Gartner’s Magic Quadrant for Cloud Core Financial Management Suites for Midsize, Large and Global Enterprises and was positioned in the Leaders quadrant of the 2018 Magic Quadrant for Cloud Financial Close Solutions.

Contact Info
Bill Rundle
Oracle
+1.650.506.1891
bill.rundle@oracle.com
About MGM Resorts International

MGM Resorts International (NYSE: MGM) is an S&P 500® global entertainment company with national and international locations featuring best-in-class hotels and casinos, state-of-the-art meetings and conference spaces, incredible live and theatrical entertainment experiences, and an extensive array of restaurant, nightlife and retail offerings. MGM Resorts creates immersive, iconic experiences through its suite of Las Vegas-inspired brands. The MGM Resorts portfolio encompasses 29 unique hotel and destination gaming offerings including some of the most recognizable resort brands in the industry. Expanding throughout the U.S. and around the world, the company acquired the operations of Empire City Casino in New York in 2019, and in 2018, opened MGM Springfield in Massachusetts, MGM COTAI in Macau, and the first Bellagio-branded hotel in Shanghai. The over 82,000 global employees of MGM Resorts are proud of their company for being recognized as one of FORTUNE® Magazine's World's Most Admired Companies®. For more information visit us at www.mgmresorts.com.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About Modern Business Experience 2019

Modern Business Experience arms professionals across HR, finance, and supply chain with the tools to create a connected enterprise and thrive in the Experience Economy. For more information, please visit https://www.oracle.com/modern-business-experience/

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


It’s Time to Rethink the Customer Experience: Oracle CX Cloud Updates

Oracle Press Releases - Tue, 2019-03-19 07:55
Blog
It’s Time to Rethink the Customer Experience: Oracle CX Cloud Updates

By Rob Tarkoff, EVP and GM, Oracle CX Cloud—Mar 19, 2019

Rob Tarkoff

Change is accelerating. Expectations are skyrocketing. Nothing is predictable. The clock is ticking. The one truly finite resource is slipping through our fingers.

But what if Father Time was on our side? What if we had the time to stop and think about what’s truly possible? What if we had the time to make every single customer interaction really matter? And what if we could give time back to our prospects and customers?

I ask because CX is a race against time. It’s no longer about cutting cost or revenue acceleration; instead it’s about the ability to recapture time. And this is not just a speed and efficiency game. It’s about the time to respond, time to purchase, time to anticipate, and time to learn. It’s about looking at the world through your customers’ eyes and asking yourself: Is every experience that my brand delivers worth the time a customer invests with me?

Time is the currency of the Experience Economy. It’s now. It’s urgent. And it’s the context that has shaped the latest updates to the Oracle Customer Experience (CX) Cloud, which we are announcing this week. Let me dive in.

Data Science Investments Help Sales Teams Master the New Science of Sales

Long gone is the idea of a linear “customer journey.” The world simply doesn’t operate that way anymore. Instead, there’s a new science to sales, and the new data and artificial intelligence updates built into Oracle CX Cloud will help our customers master it. Our new Sales Planning offering brings advanced data science to commission modeling, sales forecasting, and territory segmentation—allowing data and algorithms to optimize revenue generation. Also, new integrations with Oracle DataFox, which provide clean, accurate and enriched B2B data to power AI and machine learning capabilities, will help sales teams expand their total addressable market and further increase efficiency. 

Marketing Cloud Updates Help Make Every Customer Interaction Unique

No one wants to be a “target.” No one wants to be “marketed” to. We all know this is true. It wastes our time. That’s why the latest updates to the Oracle Marketing Cloud enable marketers to go beyond traditional approaches to audience targeting and segmentation. By leveraging rich, contextual behavioral insights from across channels, the updates will help marketers take advantage of real-time, contextual customer data to deliver a seamless and hyper-personalized experience every time they interact with customers.

 

Oracle Service Logistics Cloud Brings Service and Supply Chain Teams Together

We don’t want to press 4 for customer service. We don’t want to be put on hold and transferred to the customer service team. We don’t have the time to search for service, and that’s why we are introducing Oracle Service Logistics Cloud. It brings together customer field service and supply chain teams to capture, diagnose, and resolve customer issues quickly and in the most cost-effective way possible. And this solution is an industry first.

Expanded CX Ecosystem: New Slack Integration and Lots More  

As customer expectations change, the way we work together is changing as well. To help bring teams together so everyone can focus on the customer, we are introducing new integrations with Slack. The integrations are another industry first and will help sales and customer service professionals improve collaboration and increase productivity. It’s really cool stuff and is just one example of how we continue to work with partners to help our customers easily take advantage of emerging technologies and meet the demands of the Experience Economy.

In addition to all the different updates, this is a time to learn. At Modern Customer Experience in Vegas this week, we are bringing together thousands of customer experience professionals from across our ecosystem to explore revolutionary concepts, create amazing new experiences, and discuss the power of Oracle CX Cloud. And if you really want to learn from the best of the best, be sure to check out the 2019 Markie Awards.

It’s your time. It’s our time. It’s exciting. It’s scary. But imagine if you will, a world where experiences are ambient, and at the same time, invisible. A world where time is the currency of great customer experiences. It’s where we are going as an industry. It’s a radically different way of thinking about how we interact with customers. Let’s lead it together.

 


Subscribe to Oracle FAQ aggregator