Feed aggregator

Car Logos

iAdvise - Mon, 2016-01-25 21:02
Symbols and elaborate images in car logos can be confusing. So many famous brands use the same animals or intricate images that may seem appealing at first but are actually so similar to one another that you can't tell one company apart from another unless you're a real expert in the field.

How many auto brands do you know that have used a jungle cat or a horse or a hawk's wings in their trademark?

There are just too many to count.

So how can you create a design for your automobile company that is easy to remember and also sets you apart from the crowd?

Why not use your corporation name in the business mark?

How many of us confuse the Honda trademark with Hyundai's or Mini's with Bentley's?

But that won't happen if your car logos and names are the same.

Remember Ford's and BMW's business images, or MG's and Nissan's? The one characteristic that makes them easy to remember is the company name in the brand mark.


But it's not really that easy to design a trademark with the corporation name. Since the only things that can make your car brand mark appealing are the fonts and colors, you need to make sure that you use the right ones to make your logo distinct and easy to remember.

What colors to use?

When using the corporation name in the trademark, the rule is very simple. Use one solid color for the text and one solid color for the background. Text in silver on a red or a dark blue background looks appealing, but you can experiment with different colors as well. You can also use white text on a dark green background, which will make your design identifiable from afar. Don't be afraid to use a brightly colored background, but make sure you use a text color that complements the background instead of clashing with it.

What kind of fonts to use?

Straight, big fonts may be easier to read from a distance, but the font styles that look intricate and appealing to customers and give your design a classic look are the curvier ones. Just make sure that the text is not so curvy that it loses its readability. You can even use the Times Roman font with an italic effect, or some other professional font style with a curvy effect, to make sure that the text is readable and rounded at the same time.

Remember the Ford logo? It may just be white text on a blue background, but it's the curvy font style that sets it apart from the rest. The same goes for the Smart car logo.

What shapes to use?

The vehicle business image has to be enclosed in a shape, of course. The shape most commonly used is a circle. You can use an oval, a loose square, or even the Superman diamond shape to enclose your design. But make sure that your chosen shape does not have so many sides that it makes the mark complicated.

The whole idea of a car corporation mark is to make it easily memorable and recognizable, along with making it a classic. Using the above-mentioned ideas can certainly do that for your trademark.

Beverly Houston works as a Senior Design Consultant at a Professional Logo Design Company. For more information on car logos and names find her competitive rates at Logo Design Consultant.
Categories: APPS Blogs

Kafka and more

DBMS2 - Mon, 2016-01-25 05:28

In a companion introduction to Kafka post, I observed that Kafka at its core is remarkably simple. Confluent offers a marchitecture diagram that illustrates what else is on offer, about which I’ll note:

  • The red boxes — “Ops Dashboard” and “Data Flow Audit” — are the initial closed-source part. No surprise that they sound like management tools; that’s the traditional place for closed source add-ons to start.
  • “Schema Management”
    • Is used to define fields and so on.
    • Is not equivalent to what is ordinarily meant by schema validation, in that …
    • … it allows schemas to change, but puts constraints on which changes are allowed.
    • Is done in plug-ins that live with the producer or consumer of data.
    • Is based on the Hadoop-oriented file format Avro.

Kafka offers little in the way of analytic data transformation and the like. Hence, it’s commonly used with companion products. 

  • Per Confluent/Kafka honcho Jay Kreps, the companion is generally Spark Streaming, Storm or Samza, in declining order of popularity, with Samza running a distant third.
  • Jay estimates that there’s such a companion product at around 50% of Kafka installations.
  • Conversely, Jay estimates that around 80% of Spark Streaming, Storm or Samza users also use Kafka. On the one hand, that sounds high to me; on the other, I can’t quickly name a counterexample, unless Storm originator Twitter is one such.
  • Jay’s views on the Storm/Spark comparison include:
    • Storm is more mature than Spark Streaming, which makes sense given their histories.
    • Storm’s distributed processing capabilities are more questionable than Spark Streaming’s.
    • Spark Streaming is generally used by folks in the heavily overlapping categories of:
      • Spark users.
      • Analytics types.
      • People who need to share stuff between the batch and stream processing worlds.
    • Storm is generally used by people coding up more operational apps.

If we recognize that Jay’s interests are obviously streaming-centric, this distinction maps pretty well to the three use cases Cloudera recently called out.

Complicating this discussion further is Confluent 2.1, which is expected late this quarter. Confluent 2.1 will include, among other things, a stream processing layer that works differently from any of the alternatives I cited, in that:

  • It’s a library running in client applications that can interrogate the core Kafka server, rather than …
  • … a separate thing running on a separate cluster.

The library will do joins, aggregations and so on, while relying on core Kafka for information about process health and the like. Jay sees this as more of a competitor to Storm in operational use cases than to Spark Streaming in analytic ones.

We didn’t discuss other Confluent 2.1 features much, and frankly they all sounded to me like items from the “You mean you didn’t have that already??” list any young product has.


Categories: Other

ServletContextAware Controller class with Spring

Pas Apicella - Mon, 2016-01-25 03:55
I rarely need to save state within the Servlet Context via an application scope, but recently I did, and here is what your controller class would look like to get access to the ServletContext with Spring. I was using Spring Boot 1.3.2.RELEASE.

In short, you implement the "org.springframework.web.context.ServletContextAware" interface, as shown below. In this example we retrieve an application-scope attribute.
  
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.json.JsonParser;
import org.springframework.boot.json.JsonParserFactory;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.context.ServletContextAware;

import javax.servlet.ServletContext;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

@Controller
public class CommentatorController implements ServletContextAware
{
    private static final Logger log = LoggerFactory.getLogger(CommentatorController.class);
    private static final JsonParser parser = JsonParserFactory.getJsonParser();

    private ServletContext context;

    // Called by Spring because this class implements ServletContextAware
    public void setServletContext(ServletContext servletContext)
    {
        this.context = servletContext;
    }

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public String listTeams(Model model)
    {
        // Read the application-scope attribute stored in the ServletContext
        String jsonString = (String) context.getAttribute("riderData");
        List<Rider> riders = new ArrayList<>();

        if (jsonString != null)
        {
            if (jsonString.trim().length() != 0)
            {
                Map<String, Object> jsonMap = parser.parseMap(jsonString);
                List<Object> riderList = (List<Object>) jsonMap.get("Riders");

                for (Object rider : riderList)
                {
                    Map m = (Map) rider;
                    riders.add(
                        new Rider((String) m.get("RiderId"),
                                  (String) m.get("Cadence"),
                                  (String) m.get("Speed"),
                                  (String) m.get("HeartRate")));
                }

                //log.info("Riders = " + riders.size());
                model.addAttribute("ridercount", riders.size());
            }
        }
        else
        {
            model.addAttribute("ridercount", 0);
        }

        model.addAttribute("riders", riders);

        return "commentator";
    }
}
Categories: Fusion Middleware

Packt - Time to learn Oracle and Linux

Surachart Opun - Sat, 2016-01-23 00:01
What is your resolution for learning? Learn Oracle, learn Linux, or both. It's good news for people who are interested in improving their Oracle and Linux skills: Packt is running a promotion (50% discount) on eBooks & videos from today until 23rd Feb, 2016.

 Use discount code XM6lxr0 for Oracle

 Use discount code ILYTW for Linux
Categories: DBA Blogs

Formatting a Download Link

Scott Spendolini - Fri, 2016-01-22 14:38
Providing file upload and download capabilities has been native functionality in APEX for a couple of major releases now. In 5.0, it's even more streamlined and 100% declarative.
In the interest of saving screen real estate, I wanted to represent the download link in an IR with an icon - specifically fa-download. This is a simple task to achieve - edit the column and set the Download Text to this:
<i class="fa fa-lg fa-download"></i>
The fa-lg will make the icon a bit larger, and is not required. Now, instead of a "download" link, you'll see the icon rendered in each row. Clicking on the icon will download the corresponding file. However, when you hover over the icon, instead of getting the standard text, what it displays is clearly not optimal, and very uninformative. Let's fix this with a quick Dynamic Action. I placed mine on the global page, as this application has several places where it can download files. You can do the same or simply put it on the page that needs it.
The Dynamic Action will fire on Page Load, and has one true action - a small JavaScript snippet:
$(".fa-download").prop('title','Download File');
This will find any instance of fa-download and replace the title with the text "Download File".
If you're using a different icon for your download, or want it to say something different, then be sure to alter the code accordingly.

If you use Internet Explorer, change is coming for you in Oracle Application Express 5.1

Joel Kallman - Thu, 2016-01-21 13:45
With the ever-changing browser landscape, we needed to make some tough decisions as to which browsers and versions are going to be deemed "supported" for Oracle Application Express.  There isn't enough time and money to support all browsers and all versions, each with different bugs and varying levels of support of standards.

A position that's been adopted for the Oracle Cloud services and products is to support the current version of a browser and the prior major release.  We are adopting this same standard for Oracle Application Express beginning with Oracle Application Express 5.1.  This will most likely have the greatest impact on those people who use Microsoft Internet Explorer. 

Beginning with Oracle Application Express 5.1, the planned minimum version of Internet Explorer, to both build and deploy applications, will be Internet Explorer 11.  I say "planned", because it's possible (but unlikely) that Microsoft releases a new browser version prior to the release of Oracle Application Express 5.1.

Granted, even Microsoft itself has already dropped support for any version of IE before Internet Explorer 11.  And with no security fixes planned for any version of IE prior to Internet Explorer 11, hopefully this will be enough to encourage all users of IE to adopt IE 11 as their minimum version.

Oracle APEX development and multiple developers/branches

Joel Kallman - Thu, 2016-01-21 11:53
Today, I observed an exchange inside of Oracle about a topic that comes up from time to time.  And it has to do with the development of APEX applications, and how you manage this across releases and a larger number of developers.  This topic tends to vex some teams when they start working with Oracle Application Express on broader development projects, especially when people are not accustomed to a hosted declarative development model.  I thought Koen Lostrie of Oracle Curriculum Development provided a brilliant response, and it was worth sharing with the broader APEX community.

Alec from Oracle asked:
"Are there any online resources that discuss how to work with APEX with multiple developers and multiple branches of development for an application?  Our team is using Mercurial to do source control management now. The basic workflow is that there are several developers who are working on mostly independent features.  There are production, staging, development, and personal versions of the application code.  Developers implement bug fixes or new features and those get pushed to the development version.  Certain features from development get approved to go to staging and pushed.  Those features in staging may be rolled back or promoted to go on to production.  Are there resources which talk about implementing such a workflow using APEX?  Or APEX instructors to talk to about this workflow?"
And to which I thought Koen gave a very clear reply, complete with evidence of how they are successfully managing this today in their Oracle Curriculum Development team.  Koen said:

"I think a lot of teams struggle with what you are describing because of the nature of APEX source code and Database-based development.  I personally think that the development flow should be adapted to APEX rather than trying to use an existing process and apply that for APEX.

Let me explain how we do it in our team:

  • We release patches to production every 3 weeks. We have development/build/stage and production and use continuous integration to apply patches on build and stage.
  • We use an Agile-based process. At the start of each cycle we determine what goes in the patch.
  • Source control is done on Oracle Developer Cloud Service (ODCS)  – we use git and source tree. We don’t branch.
  • All developers work directly on development (the master environment) for bugs/small enhancement requests. We use the BUILD OPTION feature of APEX to prevent certain functionality from being exposed in production. This is a great feature which allows developers to create new APEX components in development, but the changes are not visible in the other environments.
  • For big changes like prototypes, a developer can work on his own instance, but this rarely happens. It is more common for a developer to work on a copy of the app to test something out. Once the change gets approved, it will go into development.

From what I see in the process you describe, the challenge in your process is that new changes get pulled back after they have made it to stage. This is a very expensive step. The developers need to roll back their changes to an earlier state, which is a very time-consuming process. And… very frustrating for the individual developer.  Is this really necessary? Can the changes not be reviewed while in development? Because that is what is proposed in the Agile methodology: the developer talks directly to the person/team that requests the new feature, and they review as early as development.  In our case stage is only for testing changes. We fix bugs when the app is in stage, but we don't roll back features once they are in stage – worst case we can delay the patch entirely, but that happens very rarely.

There is a good paper available by Rob Van Wijk. He describes how each developer works on his own instance but keeps his environment in sync with the master. In his case too, they’re working on a central master environment. The setup of such an environment is quite complex. You can find the paper here: http://rwijk.blogspot.com/2013/03/paper-professional-software-development.html"

Oracle Critical Patch Update January 2016 E-Business Suite Analysis

To start, the January 2016 Critical Patch Update (CPU) for Oracle E-Business Suite (EBS) is significant and high-risk.

First, this CPU, with 78 EBS security fixes, has 10x the number of EBS security fixes of an average CPU.  For the previous 44 CPUs released since 2005, an average of 7.5 security bugs were fixed per quarter for EBS.  Second, there are a significant number of SQL injection and other high-risk bugs, such as the ability to read arbitrary files from the EBS application servers.  Third, the security bugs are in a wide range of over 30 technical and functional modules; therefore, every EBS implementation is at significant risk.  Even if you don't have the module installed, configured, or licensed, in almost all cases the vulnerability can still be exploited. Finally, at least 10 security vulnerabilities can be readily exploited in EBS Internet-facing self-service modules.

Integrigy is credited with discovering 40 of the security bugs fixed this quarter.  We have additional security bugs open with Oracle which we expect to be resolved in the next few quarters.

Due to the high number of vulnerabilities affecting Oracle E-Business Suite 11.5.10, Oracle changed the stated 11.5.10 support policy for the January 2016 CPU from requiring an Advanced Customer Support (ACS) contract to being available for all customers with valid support contracts.  For the April 2016 through October 2016 CPUs, Oracle E-Business Suite 11.5.10 CPU patches will only be available for customers with an Advanced Customer Support (ACS) contract.  After October 2016, there will be no more CPUs for 11.5.10.

Vulnerability Breakdown

An analysis of the security vulnerabilities shows the 78 security fixes resolve 35 SQL injection bugs, 17 unauthorized access issues, 9 cross-site scripting (XSS) bugs, 5 XML External Entity (XXE) bugs, and various other security issues and weaknesses.  The most critical are the SQL injection bugs as these may permit unauthenticated web application users to execute SQL as the application database account (APPS).  Many of these SQL injection bugs allow access to sensitive data or the ability to perform privileged functions such as changing application or database passwords, granting of privileges, etc.

Also, several of the bugs allow an attacker with unauthenticated web application access to retrieve arbitrary files from the application server.  With some knowledge of EBS, it may be possible to download files with the APPS database password.

EBS Version Breakdown

23 vulnerabilities are found in all versions of Oracle E-Business Suite.  The remainder are mostly specific to the different web architectures found in each version.  The following is the breakdown of the 78 vulnerabilities by EBS version --

11.5.10   12.0.x   12.1.x   12.2.x
   66       38       40       22

For 11.5.10, there are 22 vulnerabilities in web pages implemented using mod_plsql.  mod_plsql is an Oracle specific web architecture where the web application is implemented using database PL/SQL packages.  mod_plsql was removed from EBS starting with 12.0.  For information on mitigating some of the mod_plsql vulnerabilities, see the section below "EBS 11i mod_plsql Mitigation."

Many of the R12 (12.0, 12.1, 12.2) specific vulnerabilities are in Java Server Pages (JSP) and Java servlets, which are not found in 11i.

I have included 12.0.x in the listing of versions to show that, even though this version is not supported for the January 2016 CPU, a significant number of the security bugs affect it.

January 2016 Recommendations

As with all Critical Patch Updates, the most effective method to resolve the vulnerabilities is to apply the patches in a timely manner. 

The most at-risk implementations are those running Internet-facing self-service modules (i.e., iStore, iSupplier, iSupport, etc.), and Integrigy rates this CPU as a critical risk due to the number of SQL injection vulnerabilities that can be remotely exploited without authentication.  These implementations should (1) apply the CPU as soon as possible and (2) ensure the DMZ is properly configured according to the EBS-specific instructions and the EBS URL Firewall is enabled and optimized.

If the CPU cannot be applied in a timely manner, Integrigy's AppDefend, an application firewall for the Oracle E-Business Suite, should be implemented.  AppDefend provides virtual patching and can effectively replace patching of EBS web security vulnerabilities.

EBS 11i mod_plsql Mitigation

In order to mitigate some mod_plsql security vulnerabilities, all Oracle EBS 11i environments should look at limiting the enabled mod_plsql web pages.  The script /patch/115/sql/txkDisableModPLSQL.sql can be used to limit the allowed pages listed in FND_ENABLED_PLSQL.  This script was introduced in 11i.ATG_PF.H and the most recent version is in 11i.ATG_PF.H.RUP7 or the January 2016 CPU.  This must be thoroughly tested as it may block a few mod_plsql pages used by your organization.  Review the Apache web logs for the pattern '/pls/' to see what mod_plsql pages are actively being used.  This fix is included and implemented as part of the January 2016 CPU.
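
As a quick sanity check before and after running the script, you can review which pages are currently registered as allowed. A minimal query (assuming the table lives in the standard APPLSYS schema; adjust for your environment):

SELECT *
FROM   applsys.fnd_enabled_plsql
ORDER  BY 1;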

Oracle E-Business Suite, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Making Datapump Import Stat-tastically faster

The Anti-Kyte - Wed, 2016-01-20 14:29

I’m determined to adopt a positive mental attitude this year.
When the train company explains delays by saying we have the wrong kind of sunshine, I prefer to marvel at the fact that the sun is shining at all in the depths of an English Winter. Let’s face it, it’s a rare enough phenomenon in the summer.
The slow-running of the train caused by this rare natural phenomenon also gives me more time to write this post.
There’s more “good” news – Datapump Import tends to be rather slow when it comes to applying optimizer statistics.
This is because it insists on doing it one row at a time.
All of which provides us with an opportunity to optimize our import job… by not bothering to import the stats.
"Hang on", you're thinking, "won't that mean you have to re-gather stats after the import, which probably won't be that quick either?"

Not necessarily. You just need to think positive…

What I’m going to cover here is :

  • How to save stats to a table
  • Export without the stats
  • Import without stats
  • Applying stats from a table

I’m using 11gR2 Express Edition in the examples that follow.
We’ll start by exporting the HR schema and then import the tables into the HR_DEV schema.

As there are overhead-line problems in the Watford Junction area, we’ve also got time to choose between running the datapump export and import on the command line or via the DBMS_DATAPUMP package.

Saving Stats to a Table

Let’s start by making sure that we have some optimizer stats on the tables in the HR schema :

select table_name, last_analyzed, num_rows
from dba_tab_statistics
where owner = 'HR'
order by table_name
/

TABLE_NAME                     LAST_ANALYZED        NUM_ROWS
------------------------------ ------------------ ----------
COUNTRIES                      13-JAN-16                  25
DEPARTMENTS                    13-JAN-16                  27
EMPLOYEES                      13-JAN-16                 107
JOBS                           13-JAN-16                  19
JOB_HISTORY                    13-JAN-16                  10
LOCATIONS                      13-JAN-16                  23
REGIONS                        13-JAN-16                   4

7 rows selected.

I can see that all of the tables in the schema have stats, which is good enough for my purposes here.
If you find that the LAST_ANALYZED value is null for the tables in your database, or if you just decide that you want to take a less cavalier approach to the relevance of your Optimizer stats, you can update them by running :

begin
    dbms_stats.gather_schema_stats('HR');
end;
/

Now we know we’ve got some stats, we need to save them to a table. This process is made fairly straightforward by DBMS_STATS. To create an appropriately structured table in the HR schema, we simply need to run :

begin
    dbms_stats.create_stat_table( ownname => 'HR', stattab => 'exp_stats');
end;
/

The CREATE_STAT_TABLE procedure creates the table specified in the stattab parameter, in the schema specified in the ownname parameter.

So, we now have a table in HR called EXP_STATS, which looks like this…

desc hr.exp_stats

 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 STATID                                             VARCHAR2(30)
 TYPE                                               CHAR(1)
 VERSION                                            NUMBER
 FLAGS                                              NUMBER
 C1                                                 VARCHAR2(30)
 C2                                                 VARCHAR2(30)
 C3                                                 VARCHAR2(30)
 C4                                                 VARCHAR2(30)
 C5                                                 VARCHAR2(30)
 N1                                                 NUMBER
 N2                                                 NUMBER
 N3                                                 NUMBER
 N4                                                 NUMBER
 N5                                                 NUMBER
 N6                                                 NUMBER
 N7                                                 NUMBER
 N8                                                 NUMBER
 N9                                                 NUMBER
 N10                                                NUMBER
 N11                                                NUMBER
 N12                                                NUMBER
 D1                                                 DATE
 R1                                                 RAW(32)
 R2                                                 RAW(32)
 CH1                                                VARCHAR2(1000)
 CL1                                                CLOB


Now we need to populate this table. Once again, we need to use DBMS_STATS…

begin
    dbms_stats.export_schema_stats( ownname => 'HR', stattab => 'exp_stats');
end;
/

…and we can see that we now have some data in the table…

select count(*)
from exp_stats
/

  COUNT(*)
----------
        62

The Export

When it comes to datapump exports, you may reasonably take the view that the best policy is to export everything and then pick and choose what you want from the resultant dump file when importing.

Speaking of the dump file, if you want to find it on the OS, you’ll need to know the location pointed to by the DATA_PUMP_DIR directory object. To find this :

select directory_path
from dba_directories
where directory_name = 'DATA_PUMP_DIR'
/

If you’re running the datapump utility from the command line…

expdp system/pwd@XE directory=data_pump_dir dumpfile=hr_full_exp.dmp schemas=HR

…where pwd is the password for SYSTEM.

Alternatively, you can use the PL/SQL API as implemented through the DBMS_DATAPUMP package :

declare
    l_dph number;
    l_state varchar2(30) := 'NONE';
    l_status ku$_status;
begin
    l_dph := dbms_datapump.open
    (
        operation => 'EXPORT',
        job_mode => 'SCHEMA',
        job_name => 'HR_FULL_EXP'
    );
    
    -- Just the HR schema...
    dbms_datapump.metadata_filter
    (
        handle => l_dph,
        name => 'SCHEMA_EXPR',
        value => q'[ IN ('HR') ]'
    );
    
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'hr_full_exp.dmp',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_dump_file,
        reusefile => 1
    );
    
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'hr_full_exp.log',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_log_file,
        reusefile => 1
    );
    
    dbms_datapump.log_entry
    (
        handle => l_dph,
        message => 'Job starting at '||to_char(sysdate, 'HH24:MI:SS')
    );
    
    dbms_datapump.start_job( handle => l_dph);

    --
    -- Wait for the job to finish...
    --
    while l_state not in ('COMPLETED', 'STOPPED')
    loop
        dbms_datapump.get_status
        (
            handle => l_dph,
            mask => dbms_datapump.ku$_status_job_error +
                dbms_datapump.ku$_status_job_status +
                dbms_datapump.ku$_status_wip,
            timeout => -1,
            job_state => l_state,
            status => l_status
        );
    end loop;
    dbms_datapump.detach( l_dph);
end;
/

After we've run this, we can confirm that the EXP_STATS table has been included in the export by checking the hr_full_exp.log file that gets created in the DATA_PUMP_DIR directory…

...
. . exported "HR"."EXP_STATS"                            20.03 KB      62 rows
. . exported "HR"."COUNTRIES"                            6.367 KB      25 rows
. . exported "HR"."DEPARTMENTS"                          7.007 KB      27 rows
. . exported "HR"."EMPLOYEES"                            16.80 KB     107 rows
. . exported "HR"."JOBS"                                 6.992 KB      19 rows
. . exported "HR"."JOB_HISTORY"                          7.054 KB      10 rows
. . exported "HR"."LOCATIONS"                            8.273 KB      23 rows
. . exported "HR"."REGIONS"                              5.476 KB       4 rows
...
Importing without Applying stats

To import the HR tables into the HR_DEV schema, whilst ensuring that datapump doesn’t apply stats…

If you’re using the import command-line utility …

impdp system/pwd@XE directory=data_pump_dir dumpfile=hr_full_exp.dmp remap_schema=HR:HR_DEV exclude=STATISTICS

Alternatively, using DBMS_DATAPUMP…

declare
    l_dph number;
    l_state varchar2(30) := 'NONE';
    l_status ku$_status;
        
begin

    l_dph := dbms_datapump.open
    (
        operation => 'IMPORT',
        job_mode => 'SCHEMA',
        job_name => 'HR_IMP_NO_STATS'
    );

    --
    -- Import HR objects from the export file into the HR_DEV schema
    --    
    dbms_datapump.metadata_remap
    (
        handle => l_dph,
        name => 'REMAP_SCHEMA',
        old_value => 'HR',
        value => 'HR_DEV'
    );
    
    -- Don't import any stats...
    dbms_datapump.metadata_filter
    (
        handle => l_dph,
        name => 'EXCLUDE_PATH_EXPR',
        value => q'[ = 'STATISTICS']'
    );
    
    dbms_datapump.set_parameter
    (
        handle => l_dph,
        name => 'TABLE_EXISTS_ACTION',
        value => 'REPLACE'
    );
    
   dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'hr_full_exp.dmp',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_dump_file,
        reusefile => 1
    );
    
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'hr_full_imp.log',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_log_file,
        reusefile => 1
    );

    dbms_datapump.log_entry
    (
        handle => l_dph,
        message => 'Job starting at '||to_char(sysdate, 'HH24:MI:SS')
    );
    
    dbms_datapump.start_job( handle => l_dph);
 
    -- Wait for the job to finish...
 
    while l_state not in ('COMPLETED', 'STOPPED')
    loop
        dbms_datapump.get_status
        (
            handle => l_dph,
            mask => dbms_datapump.ku$_status_job_error +
                dbms_datapump.ku$_status_job_status +
                dbms_datapump.ku$_status_wip,
            timeout => -1,
            job_state => l_state,
            status => l_status
        );
    end loop;
    dbms_datapump.detach( l_dph);
end;
/    

If we now check, we can confirm that there are indeed no stats on the tables we've just imported…

select table_name, last_analyzed, num_rows
from dba_tab_statistics
where owner = 'HR_DEV'
order by table_name
/

TABLE_NAME                     LAST_ANALYZED        NUM_ROWS
------------------------------ ------------------ ----------
COUNTRIES
DEPARTMENTS
EMPLOYEES
EXP_STATS
JOBS
JOB_HISTORY
LOCATIONS
REGIONS

8 rows selected.


Now for the final touch, apply the stats that we have in the EXP_STATS table. Should be easy enough…

Applying stats from a table

If we were importing into a schema with the same name as the one we saved stats for, this would be straightforward.
However, in this case, we’re importing into a different schema – HR_DEV.
Therefore, if we want to avoid “leaves-on-the-line”, we need to do a little light hacking.

To make things a bit clearer, let’s have a look at the contents of the C5 column of our EXP_STATS table…

select distinct(c5)
from exp_stats
/

C5
------------------------------
HR


Yes, the table owner (for that is what the C5 column contains) is set to HR. This is reasonable enough as it was the stats for this schema which we saved to the table in the first place. However, this means that the stats will not be applied to the tables in the HR_DEV schema unless we do this…

update exp_stats
set c5 = 'HR_DEV'
where c5 = 'HR'
/

62 rows updated.

commit;

Commit complete.

Now that’s done, we can apply the stats with a call to DBMS_STATS.IMPORT_SCHEMA_STATS…

begin
    dbms_stats.import_schema_stats(ownname => 'HR_DEV', stattab => 'exp_stats');
end;
/

Check again, and the stats are now on the tables :

select table_name, last_analyzed, num_rows
from dba_tab_statistics
where owner = 'HR_DEV'
order by table_name
/

TABLE_NAME                     LAST_ANALYZED        NUM_ROWS
------------------------------ ------------------ ----------
COUNTRIES                      13-JAN-16                  25
DEPARTMENTS                    13-JAN-16                  27
EMPLOYEES                      13-JAN-16                 107
EXP_STATS
JOBS                           13-JAN-16                  19
JOB_HISTORY                    13-JAN-16                  10
LOCATIONS                      13-JAN-16                  23
REGIONS                        13-JAN-16                   4

8 rows selected.


Whilst importing stats separately does entail a few more steps, it does mean that there is rather less hanging around while datapump import does its impression of a train trying to get through "the wrong kind of snow".


Filed under: Oracle, PL/SQL Tagged: DataPump, dba_tab_statistics, dbms_datapump, dbms_datapump.metadata_filter, dbms_stats, dbms_stats.create_stat_table, dbms_stats.export_schema_stats, dbms_stats.import_schema_stats, EXCLUDE_PATH_EXPR, expdp, impdp, importing stats into a different schema using dbms_stats, remap_schema

Highlight numbers in an APEX Report (SQL and Class)

Dimitri Gielis - Wed, 2016-01-20 09:31
Last year I blogged about highlighting negative numbers in an APEX Report, the CSS only way.
At that time I gave two alternative approaches, using jQuery or SQL, but it looks like I never wrote those posts, until somebody reminded me. This post is about using SQL to highlight something in a report.

Let's say we want to highlight negative numbers in a report (as in the previous post):


We have some CSS defined inline in the Page:

.negative-number {
  color:red;
}

The negative-number class we will add to some values. All the logic to know if it's a negative number will be in SQL. Why SQL, you might ask? This example is very simple, but you could call a function with a lot of complexity behind it to decide whether or not to assign a class to a record. The principle of this example is what matters: you can use logic in SQL to work with CSS.

The SQL Query of the Report looks like this. Watch for the case statement where we say when to assign a value for the class:

select 
 description,
 amount,
 case 
   when amount < 0
   then 'negative-number'
   else ''
 end as class
from dimi_transaction
order by id

Finally we assign the class to the amount, by adding a span in the HTML Expression of the Amount column; with the column aliases from the query above, it would look something like this:

<span class="#CLASS#">#AMOUNT#</span>


The Class column you can make Conditional = Never as it's something we just use behind the scenes.

That's how you make a bridge between SQL and CSS.

You can now play more with the case statement and even let the class or style, e.g. the color, come from a user-defined table... unlimited possibilities :)
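
As a minimal sketch of that idea (the dimi_styles lookup table and its style_key/css_class columns are hypothetical):

select
 t.description,
 t.amount,
 s.css_class as class
from dimi_transaction t
left join dimi_styles s
  on s.style_key = case when t.amount < 0 then 'NEGATIVE' else 'DEFAULT' end
order by t.id

With this, the CSS class for each state lives in data rather than in the query, so restyling the report becomes an update on dimi_styles instead of a report change.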

Categories: Development

January 2016 Critical Patch Update Released

Oracle Security Team - Tue, 2016-01-19 18:11

Oracle today released the January 2016 Critical Patch Update.  With this Critical Patch Update release, the Critical Patch Update program enters its 11th year of existence (the first Critical Patch Update was released in January 2005).  As a reminder, Critical Patch Updates are currently released 4 times a year, on a schedule announced a year in advance.  Oracle recommends that customers apply this Critical Patch Update as soon as possible.

The January 2016 Critical Patch Update provides fixes for a wide range of product families; including: 

  • Oracle Database
    • None of these database vulnerabilities are remotely exploitable without authentication. 
  • Java SE vulnerabilities
    • Oracle strongly recommends that Java home users visit the java.com web site to ensure that they are using the most recent version of Java, and advises them to remove obsolete Java SE versions from their computers if they are not absolutely needed.
  • Oracle E-Business Suite.
    • Oracle’s ongoing assurance effort with E-Business Suite helps remediate security issues and is intended to help enhance the overall security posture provided by E-Business Suite.

Oracle takes security seriously, and strongly encourages customers to keep up with newer releases in order to benefit from Oracle’s ongoing security assurance effort.  

For more information:

The January 2016 Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/topics/security/cpujan2016-2367955.html

The Oracle Software Security Assurance web site is located at https://www.oracle.com/support/assurance/index.html.

Oracle Applications Lifetime Support Policy is located at http://www.oracle.com/us/support/library/lifetime-support-applications-069216.pdf.


Drop table cascade and reimport

Laurent Schneider - Tue, 2016-01-19 12:26

Happy new year 🙂

Today I had to import a subset of a database and the challenge was to restore a parent table without restoring its children. It took me some minutes to write the code, but it would have taken days to restore the whole database.

CREATE TABLE t1(
  c1 NUMBER CONSTRAINT t1_pk PRIMARY KEY);
INSERT INTO t1 (c1) VALUES (1);
CREATE TABLE t2(
  c1 NUMBER CONSTRAINT t2_t1_fk REFERENCES t1,
  c2 NUMBER CONSTRAINT t2_pk PRIMARY KEY);
INSERT INTO t2 (c1, c2) VALUES (1, 2);
CREATE TABLE t3(
  c2 NUMBER CONSTRAINT t3_t2_fk REFERENCES t2,
  c3 NUMBER CONSTRAINT t3_pk PRIMARY KEY);
INSERT INTO t3 (c2, c3) VALUES (2, 3);
CREATE TABLE t4(
  c3 NUMBER CONSTRAINT t4_t3_fk REFERENCES t3,
  c4 NUMBER CONSTRAINT t4_pk PRIMARY KEY);
INSERT INTO t4 (c3, c4) VALUES (3, 4);
COMMIT;

expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=scott.dmp reuse_dumpfiles=y

Now what happens if I want to restore T2 and T3?

First, I check the dictionary for foreign keys from other tables pointing to T2 and T3.

SELECT table_name, constraint_name
FROM user_constraints
WHERE (r_constraint_name) IN (
    SELECT constraint_name
    FROM user_constraints
    WHERE table_name IN ('T2', 'T3'))
  AND table_name NOT IN ('T2', 'T3');

TABLE_NAME                     CONSTRAINT_NAME               
------------------------------ ------------------------------
T4                             T4_T3_FK                      

T4 points to T3 and T4 has data.

Now I can drop my tables with the cascade constraints option

drop table t2 cascade constraints;
drop table t3 cascade constraints;

Now I import: first the tables, then the referential constraints that were dropped by the cascade clause and do not belong to T2/T3.

impdp scott/tiger tables=T2,T3 directory=DATA_PUMP_DIR dumpfile=scott.dmp

impdp scott/tiger  "include=ref_constraint:\='T4_T3_FK'" directory=DATA_PUMP_DIR dumpfile=scott.dmp

It's probably possible to do it in one import, but the include syntax is horrible; I tried.

Oracle Database Critical Patch Update (CPU) Planning for 2016

With the start of the new year, it is now time to think about Oracle Critical Patch Updates for 2016.  Oracle releases security patches in the form of Critical Patch Updates (CPU) each quarter (January, April, July, and October).  These patches include important fixes for security vulnerabilities in the Oracle Database.  The CPUs are only available for certain versions of the Oracle Database; therefore, advance planning is required to ensure supported versions are being used, and mitigating controls may be required when the CPUs cannot be applied in a timely manner.

CPU Supported Database Versions

As of the October 2015 CPU, the only CPU supported database versions are 11.2.0.4, 12.1.0.1, and 12.1.0.2.  The final CPU for 12.1.0.1 will be July 2016.  11.2.0.4 will be supported until October 2020 and 12.1.0.2 will be supported until July 2021.

11.1.0.7 and 11.2.0.3 CPU support ended as of July 2015. 

Database CPU Recommendations
  1. When possible, all Oracle databases should be upgraded to 11.2.0.4 or 12.1.0.2.  This will ensure CPUs can be applied through at least October 2020.
     
  2. [12.1.0.1] New databases or application/database upgrade projects currently testing 12.1.0.1 should immediately look to implement 12.1.0.2 instead of 12.1.0.1, even if this will require additional effort or testing.  With the final CPU for 12.1.0.1 being July 2016, unless a project is implementing in January or February 2016, we believe it is imperative to move to 12.1.0.2 to ensure long-term CPU support.
     
  3. [11.2.0.3 and prior] If a database cannot be upgraded, the only effective mitigating control for many database security vulnerabilities is to strictly limit direct database access.  In order to restrict database access, Integrigy recommends using valid node checking, Oracle Connection Manager, network restrictions and firewall rules, and/or terminal servers and bastion hosts.  Direct database access is required to exploit database security vulnerabilities, and most often a valid database session is required.
     

Regardless of whether security patches are regularly applied, general database hardening such as changing database passwords, optimizing initialization parameters, and enabling auditing should be done for all Oracle databases.
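
As a starting point for the auditing piece, here is a minimal sketch using the classic audit trail; the statements are standard Oracle syntax, but which actions to audit is an assumption to tailor to your environment:

-- Write the audit trail to the database; takes effect after a restart
ALTER SYSTEM SET audit_trail = 'DB' SCOPE = SPFILE;

-- Audit logons and a few high-impact actions
AUDIT SESSION;
AUDIT ALTER USER;
AUDIT GRANT ANY PRIVILEGE;
AUDIT GRANT ANY ROLE;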

 

Oracle Database, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Oracle E-Business Suite Critical Patch Update (CPU) Planning for 2016

With the start of the new year, it is now time to think about Oracle Critical Patch Updates for 2016.  Oracle releases security patches in the form of Critical Patch Updates (CPU) each quarter (January, April, July, and October).  These patches include important fixes for security vulnerabilities in the Oracle E-Business Suite and its technology stack.  The CPUs are only available for certain versions of the Oracle E-Business Suite and Oracle Database; therefore, advance planning is required to ensure supported versions are being used, and mitigating controls may be required when the CPUs cannot be applied in a timely manner.

For 2016, CPUs for Oracle E-Business Suite will become a significant focus, as a large number of security vulnerabilities for the Oracle E-Business Suite will be fixed.  The January 2016 CPU for the Oracle E-Business Suite (EBS) will include 78 security fixes for a wide range of security bugs, many of them high risk, such as SQL injection in web-facing self-service modules.  Integrigy anticipates the next few quarters will have an above-average number of EBS security fixes (the average is 7 per CPU since 2005).  This large number of security bugs puts Oracle EBS environments at significant risk, as many of these bugs will be high risk and well publicized.

Supported Oracle E-Business Suite Versions

Starting with the April 2016 CPU, only 12.1 and 12.2 will be fully supported for CPUs moving forward.  11.5.10 CPU patches for April 2016, July 2016, and October 2016 will only be available to customers with an Advanced Customer Support (ACS) contract.  There will be no 11.5.10 CPU patches after October 2016.  CPU support for 12.0 ended as of October 2015.

11.5.10 Recommendations
  1. When possible, the recommendation is to upgrade to 12.1 or 12.2.
  2. Obtaining an Advanced Customer Support (ACS) contract is a short term (until October 2016) solution, but is an expensive option.
  3. An alternative to applying CPU patches is to use Integrigy's AppDefend, an application firewall for Oracle EBS, in proxy mode which blocks EBS web security vulnerabilities.  AppDefend provides virtual patching and can effectively replace patching of EBS web security vulnerabilities.

In order to mitigate some mod_plsql security vulnerabilities, all Oracle EBS 11i environments should look at limiting the enabled mod_plsql web pages.  The script /patch/115/sql/txkDisableModPLSQL.sql can be used to limit the allowed pages listed in FND_ENABLED_PLSQL.  This script was introduced in 11i.ATG_PF.H and the most recent version is in 11i.ATG_PF.H.RUP7.  This must be thoroughly tested as it may block a few mod_plsql pages used by your organization.  Review the Apache web logs for the pattern '/pls/' to see what mod_plsql pages are actively being used.  This fix is included and implemented as part of the January 2016 CPU.

12.0 Recommendations
  1. As no security patches are available for 12.0, the recommendation is to upgrade to 12.1 or 12.2 when possible.
  2. If upgrading is not feasible, Integrigy's AppDefend, an application firewall for Oracle EBS, provides virtual patching for EBS web security vulnerabilities as well as blocks common web vulnerabilities such as SQL injection and cross-site scripting (XSS).  AppDefend is a simple to implement and cost-effective solution when upgrading EBS is not feasible.
12.1 Recommendations
  1. 12.1 is supported for CPUs through October 2019 for implementations where the minimum baseline is maintained.  The current minimum baseline is the 12.1.3 Application Technology Stack (R12.ATG_PF.B.delta.3).  This minimum baseline should remain consistent until October 2019, unless a large number of functional module specific (i.e., GL, AR, AP, etc.) security vulnerabilities are discovered.
  2. For organizations where applying CPU patches is not feasible within 30 days of release or Internet facing self-service modules (i.e., iSupplier, iStore, etc.) are used, AppDefend should be used to provide virtual patching of known, not yet patched web security vulnerabilities and to block common web security vulnerabilities such as SQL injection and cross-site scripting (XSS).
12.2 Recommendations
  1. 12.2 is supported for CPUs through July 2021 as there will be no extended support for 12.2.  The current minimum baseline is 12.2.3 plus roll-up patches R12.AD.C.Delta.7 and R12.TXK.C.Delta.7.  Integrigy anticipates the minimum baseline will creep up as new RUPs (12.2.x) are released for 12.2.  Your planning should anticipate the minimum baseline will be 12.2.4 in 2017 and 12.2.5 in 2019 with the releases of 12.2.6 and 12.2.7.  With the potential release of 12.3, a minimum baseline of 12.2.7 may be required in the future.
  2. For organizations where applying CPU patches is not feasible within 30 days of release or Internet facing self-service modules (i.e., iSupplier, iStore, etc.) are used, AppDefend should be used to provide virtual patching of known, not yet patched web security vulnerabilities and to block common web security vulnerabilities such as SQL injection and cross-site scripting (XSS).
EBS Database Recommendations
  1. As of the October 2015 CPU, the only CPU supported database versions are 11.2.0.4, 12.1.0.1, and 12.1.0.2.  11.1.0.7 and 11.2.0.3 CPU support ended as of July 2015.  The final CPU for 12.1.0.1 will be July 2016.
  2. When possible, all EBS environments should be upgraded to 11.2.0.4 or 12.1.0.2, which are supported for all EBS versions including 11.5.10.2.
  3. If database security patches (SPU or PSU) cannot be applied in a timely manner, the only effective mitigating control is to strictly limit direct database access.  In order to restrict database access, Integrigy recommends using the EBS feature Managed SQLNet Access, Oracle Connection Manager, network restrictions and firewall rules, and/or terminal servers and bastion hosts.
  4. Regardless if security patches are regularly applied or not, general database hardening such as changing database passwords, optimizing initialization parameters, and enabling auditing should be done for all EBS databases.
Oracle E-Business Suite, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Recover from ORA-01172 & ORA-01151

DBASolved - Tue, 2016-01-19 07:48

This morning I was working on an Oracle Management Repository (OMR) for a test Enterprise Manager that is used by a few consultants I work with. When I logged into the box, I found that the OMR was down. When I went to start the database, I was greeted with ORA-01172 and ORA-01151.

These errors basically say:

ORA-01172 – recovery of thread % stuck at block % of file %
ORA-01151 – use media recovery to recover block, restore backup if needed

So how do I recover from this? The solution is simple; I just needed to perform the following steps:

1. Shutdown the database

SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

2. Mount the database

SQL> startup mount;
ORACLE instance started.
Total System Global Area 1.0033E+10 bytes
Fixed Size 2934696 bytes
Variable Size 1677723736 bytes
Database Buffers 8321499136 bytes
Redo Buffers 30617600 bytes
Database mounted.

3. Recover the database

SQL> recover database;
Media recovery complete.

4. Open the database with “alter database”

SQL> alter database open;
Database altered.

At this point, you should be able to access the database (OMR) and then have the EM environment up and running.

Enjoy!

about.me:http://about.me/dbasolved


Filed under: Database
Categories: DBA Blogs

Using SKIP LOCKED feature in DB Adapter polling

Darwin IT - Tue, 2016-01-19 04:53
I spent the last few days describing a throttle mechanism using the DB Adapter. Today the 'Distributed Polling' functionality of the DB Adapter was mentioned to me, which uses the SKIP LOCKED clause of the database.

On one of the pages you'll get to check the 'Distributed Polling' option:
Leave it as it is, since it adds the 'SKIP LOCKED' option to the 'FOR UPDATE' clause.

In my example screendump I set the Database Rows per Transaction, but you might want to set it to a sensibly higher value with regard to the 'RowsPerPollingInterval' that you need to set yourself in the JCA file:

The 'RowsPerPollingInterval' is not an option in the UI, unfortunately. You might want to set it to a multiple of the MaxTransactionSize (denoted in the UI as 'Database Rows per Transaction').

A great explanation of this functionality is this A-Team blogpost. Unfortunately the link to the documentation about 'SKIP LOCKED' in that post is broken; I found this one. A nice thing is that it suggests using AQ as the preferred solution instead of SKIP LOCKED.
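
To see what SKIP LOCKED buys you, here is a minimal sketch in plain SQL of the kind of polling query the adapter issues under the covers (the my_events table and its columns are made up for illustration):

-- Each polling instance locks only unprocessed rows that no other
-- instance currently holds; locked rows are skipped, not waited on.
SELECT event_id, payload
FROM   my_events
WHERE  status = 'NEW'
FOR UPDATE SKIP LOCKED;

Two sessions running this concurrently each get a disjoint set of rows, which is what lets multiple pollers share one table without blocking each other.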

Maybe a better way for throttling is using the AQ Adapter together with the properties

CrossFit and Coding: 3 Lessons for Women and Technology

Usable Apps - Mon, 2016-01-18 17:21

Yes, it’s January again. Time to act on that New Year resolution and get into the gym to burn off those holiday excesses. But have you got what it takes to keep going back?

Here’s Sarahi Mireles (@sarahimireles), our User Experience Developer in Oracle’s México Development Center, to tell us about how her CrossFit experience not only challenges the myths about fierce workouts being something only for the guys but about what that lesson can teach us about coding and women in technology too…

Introducing CrossFit: Me Against Myself

Heard about CrossFit? In case you haven’t, it’s an intense fitness program with a mix of weights, cardio, other exercises, and a lot of social media action too about how much we love doing CrossFit.

CrossFit is also a great way to keep fit and to make new friends. Most workouts are so tough that you’re left all covered in sweat, your muscles are on fire, and you feel like it's going to be impossible to even move the next day.

But you keep doing it anyway. 

One of the things I love most about CrossFit is that it is super dynamic. The Workout of the Day (WOD) is a combination of activities, from running outside, gymnastics, weight training, to swimming. You’re never doing the same thing two days in a row. 

Sounds awesome, right? Well, it is!

But some people, particularly women, unfortunately think CrossFit will make them bulk up and they’ll end up with HUGE muscles! A lot of people on the Internet are saying this, and lots of my friends believe it too: CrossFit is really for men and not women. 

From CrossFit to CrossWIT: Women in Technology (WIT)

Just like with CrossFit, there are many young women who also believe that coding is something meant only for men. Seems crazy, but let's be honest, hiring a woman who knows how to code can be a major challenge (my manager can tell you about that!).

So, why aren't women interested in either coding or lifting weights? Or are they? Is popular opinion the truth, that there are some things that women shouldn't do rather than cannot do?

The reality is that CrossFit won't make you bulk up like a bodybuilder, just as studying those science, technology, engineering, or mathematics (STEM) subjects in school won't make you any less feminine. Women have been getting the wrong messages about gender and technology from the media and from advertising since we were little girls. We grew up believing that intense workout programs, just like learning computer languages, engineering, science, and math, are "man's stuff". And then we wonder where the women in technology are?!

3 Lessons to Challenge Conventions and Change Yourself

So, whether you are interested in these things or not, I would like to point out 3 key lessons, based on my experience, that I am sure will help you at some stage of your life:

  1. Don't be afraid of defying those gender stereotypes. You can become whatever you want to be: a successful doctor, a great programmer, or even a CrossFit professional. Go for it!

  2. Choosing to be or to do something different from what others consider "normal" can be hard, but keep doing it! There are talented women in many fields of work who, despite the stereotypes, are awesome professionals, are respected for what they do, and have become key parts of their organizations and companies. Coding is a world largely dominated by men now, with 70% of the jobs taken by males, but that does not stop us from challenging and changing things so that diversity makes the tech industry a better place for everyone.

  3. If you are interested in coding, computer science, or technology in general, keep up with your passion by learning more from others by reading the latest tech blogs, for example. If you don't know where to start, here are some great examples to inspire you: our own VoX, Usable Apps, and AppsLab blogs. Read up about the Oracle Women in Technology (WIT) program too.

I'm sure you'll find something of interest in the work Oracle does and you can use our resources to pursue your interests in a career in technology! And who knows? Maybe you can join us at an Oracle Applications User Experience event in the future. We would love to see you there and meet you in person.

I think you will like what you can become! Just like the gym, don’t wait until next January to start.
