Feed aggregator

Concepts Guide: 9/27 - Process Architecture

Charles Schultz - Fri, 2010-06-04 13:26
"Figure 9-1 can represent multiple concurrent users running an application on the same computer as Oracle. This particular configuration usually runs on a mainframe or minicomputer."

Wow, this section of the documentation must have been recycled for a number of years. =)

Good pictures, descriptions of various processes.

In general, I like the "See also" sections, but I wish the link would go directly to the relevant section of the reference, instead of the top-most TOC page.

This section confused me:
"When a checkpoint occurs, Oracle must update the headers of all datafiles to record the details of the checkpoint. This is done by the CKPT process. The CKPT process does not write blocks to disk; DBWn always performs that work.

The statistic DBWR checkpoints displayed by the System_Statistics monitor in Enterprise Manager indicates the number of checkpoint requests completed."

If the CKPT process is responsible for updating the datafile headers and DBWn is responsible for something else (writing blocks to disk), why is the statistic called DBWR checkpoints? That is quite misleading, and perhaps it leads to the confusion that spawned the warning about DBWR in the first place. =)
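
For what it's worth, the raw numbers behind that Enterprise Manager screen can be pulled straight from V$SYSSTAT; a quick sketch (exact statistic names vary a little between releases, hence the LIKEs):

-- Checkpoint-related statistics, including the oddly named "DBWR checkpoints",
-- queried directly instead of through Enterprise Manager.
SELECT name, value
  FROM v$sysstat
 WHERE name LIKE '%checkpoint%'
    OR name LIKE 'DBWR%'
 ORDER BY name;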

Both PMON and SMON "check regularly". What is "regularly"?

While there are a lot of good ideas embedded in Oracle, it is surprising that some of them still have such an antiquated and/or obfuscated interface. For example, the job scheduling system. The job queue processes are quite cool, but using them is a pain in the arse. The EMGC GUI is not too shabby, but what really sucks is the API; what about a simple API for those of us who do command-line work? VPD and Streams are the same way (I have not yet seen any GUI for VPD). At least Shared Server is a little easier to grasp and implement, but it is still very easy to shoot yourself in the foot.
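
Just to illustrate what that command-line work looks like, here is a minimal DBMS_SCHEDULER sketch from SQL*Plus; the job name, action and schedule below are made up for illustration:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_STATS_JOB',   -- illustrative name only
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(USER); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2', -- every night at 02:00
    enabled         => TRUE);
END;
/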

In terms of performance in the context of Shared Server, wouldn't immediate results from FIRST_ROWS_n operations be queued as well? So it would be possible for queued results to actually return more slowly than when using a dedicated server?

Overall I found this chapter disappointingly light on details, and on examples for that matter. I would love to see the program flow, end-to-end, of requesting, establishing, executing and concluding a transaction. Likewise, the last few sections (under "The Program Interface") don't really say much at all - they read more like a dictionary or appendix; nothing really describes what these things are, how they work, or the role they play in the larger picture. I mean, they do a little, but not a whole lot.

Shameless boasting

Tony Andrews - Fri, 2010-06-04 05:44
I hate to boast but... StackOverflow has become one of my favourite forums for reading and sometimes answering Oracle-related questions (though it covers all programming topics, in fact). Today I am the first person ever to be awarded the Oracle badge for having earned 1000 upvotes for my answers to questions with the Oracle tag. Of course, this may just mean I have too much time on my hands...

Berkeley DB Java Edition High Availability Performance Whitepaper

Charles Lamb - Thu, 2010-06-03 07:53

Over the past few months we've been working on measuring the impact of HA on JE performance when running on large configurations. The results are documented in a whitepaper that I wrote.

vmForce - adding new age features to the application platform

Vikas Jain - Thu, 2010-06-03 00:08
As VMware and Force.com joined hands to create the vmForce platform for cloud applications, it's interesting to note how some of the new-age features are becoming part and parcel of the application infrastructure.

A few years back, an application server with servlet and EJB containers, connection pooling and other services was considered to be an application platform. Then, with the SOA wave, features like orchestration (BPEL), a service bus (for routing, transformation), adapters (for connecting apps), and governance tools became part of the platform, leading to the development of composite applications.
Now vmForce is taking it another step ahead, making features such as collaboration-style social apps, Google-like search over any data, mobile access, BPM and reporting dashboards part of the platform, relieving application developers and administrators of the pain of integrating with external tools to provide these features.

The following vmForce feature list is extracted from Anshu's blog post on this topic.
  • Social Profiles: Who are the users in this application so I can work with them?
  • Status Updates: What are these users doing? How can I help them and how can they help me?
  • Feeds: Beyond user status updates, how can I find the data that I need? How can this data come to me via Push? How can I be alerted if an expense report is approved or a physician is needed in a different room?
  • Content Sharing: How can I upload a presentation or a document and instantly share it in a secure and managed manner with the right set of co-workers?
  • Search: Ability to search any and all data in your enterprise apps
  • Reporting: Ability to create dashboards and run reports, including the ability to modify these reports
  • Mobile: Ability to access business data from mobile devices ranging from BlackBerry phones to iPhones
  • Integration: Ability to integrate new applications via standard web services with existing applications
  • Business Process Management: Ability to visually define business processes and modify them as business needs evolve
  • User and Identity Management: Real-world applications have users! You need the capability to add, remove, and manage not just the users but what data and applications they can have access to
  • Application Administration: Usually an afterthought, administration is a critical piece once the application is deployed

Connecting Salesforce.com from Google AppEngine using OAuth

Vikas Jain - Wed, 2010-06-02 23:47
Here's a blog post on how to connect to and authenticate with salesforce.com from an application deployed on Google App Engine using the OAuth protocol.

See how the complexity of the OAuth protocol has been hidden by the helper APIs of OAuthAccessor and OauthHelperUtils.
Refer to this demo project written by Jeff Douglas.

Force.com security

Vikas Jain - Wed, 2010-06-02 23:37
You can find resources and links to Force.com platform security for secure cloud development here.

What I like is how it's organized - complete with education material, security design principles, secure coding guidelines, security testing tools, and how to perform a security review - providing end-to-end guidance on how to implement security for apps deployed on Force.com.

JHeadstart custom search with default values

JHeadstart - Sun, 2010-05-30 00:15

JHeadstart has a powerful generator for ADF Faces pages. One of its features is the generation of search functionality into a page. The search functionality offers a Quick Search with a single search item and an Advanced Search for multiple search criteria. When generating search functionality with JHeadstart 11g, a design decision has to be made whether to use the ADF Model approach or the JHeadstart custom approach.

With the ADF Model approach the Quick and/or Advanced Search is defined in the View Object using Query Criteria. The generated page uses a single component (<af:query> or <af:quickQuery>) that renders a complete search area. This approach is recommended if the standard layout and behaviour of these components meet your requirements.

The JHeadstart custom approach uses meta-information in JHeadstart to generate the search areas. There is no need to specify anything on the View Object. The generated search areas are composed of multiple page components which can be flexibly arranged. This approach is recommended if you

  • have special layout requirements like organizing search items in groups using tabs
  • need to use the JHeadstart custom List Of Values because your LOVs need multi-selection, detail disclosure or special layout requirements
  • want to keep your metadata in the JHeadstart application definition instead of in the ADF Business Components
  • want to customize your search

Often it would be nice to have initial values for the search items, but JHeadstart does not supply this by default. For this customization I am using the JHeadstart custom search approach to add the functionality.

Based on the idea in the article by Lucas Jellema, "Using default search values on the JHeadstart advanced search", the solution described here shows how to implement default values for search items. In this solution default values can be specified for both Quick and Advanced Search. An additional "Default" button is added to the search region to manually apply the default values. In addition, the default values are always set when the task flow starts, e.g. when it is called from the menu. The final pages will look like these:

Quick Search with Default button: JHSSearchDefaultQS.png

Advanced Search with Default button:JHSSearchDefaultAS.png

The demo application is based on the employee data of the HR demo database schema. Here are the steps to implement it.

  1. Create a new Fusion Web Application (ADF) and define an ADF BC Model with only one updatable View Object, EmployeesView, based on the table EMPLOYEES.

  2. Enable JHeadstart on the ViewController project and create a new Service Definition. Choose the following settings:

    • Quick Search Style=JHeadstart - Dropdown List
    • Advanced Search Style=JHeadstart - Same Page
    • Always use JHeadstart LOVs in JHeadstart search area: Checked

    For the other options accept the default values of the wizard.

  3. Define a default value for the item DepartmentId when creating new rows. This is not the definition of a default value for a search item, but it will be used later in the demo to supply a default value from the generated bean. Open the JHeadstart Application Definition Editor, navigate to the group Employees and select DepartmentId under Items. Enter 50 in the property "Default Display Value".

  4. Generate the application and run it to verify that everything is working.

  5. Now it is time to add the search-with-default-values functionality. In this solution the default values are supplied as managed properties of the search bean generated by JHeadstart. A new custom generator template needs to be registered on the Employees group in the application definition:

      SEARCH_BEAN = custom/misc/facesconfig/EmployeesSearchBean.vm

    In the custom generator template the new Search bean class and the managed properties are defined:


    Defining the managed properties as a Map allows you to use the item name as the key and the default value as the value of the map entry. The item name is the name of the associated control binding generated for the page fragment. The value can be a constant or a reference to another bean. The Advanced Search default value for EmployeesDepartmentId shows an example of how to reference a value from the default-value bean for new rows.

    The new enhanced search bean has to supply managed properties to hold the default values and a method to apply the default values. The new class will extend the JHeadstart search bean class JhsSearchBean.

    package com.jhssearchdemo.controller.jsf.bean;

    import java.util.Iterator;
    import java.util.Map;
    import java.util.Set;
    import javax.annotation.PostConstruct;
    import oracle.adf.model.binding.DCBindingContainer;
    // adjust to the JhsSearchBean package of your JHeadstart release
    import oracle.jheadstart.controller.jsf.bean.JhsSearchBean;

    public class SearchBeanWithDefaultValues extends JhsSearchBean {
        // Maps of default values, populated as managed properties
        private Map quickSearchDefaultValues;
        private Map advancedSearchDefaultValues;
        ...  // getters and setters for both maps

        /**
         * Apply default values to the search bean.
         * Fired by JSF once all managed properties have been set.
         */
        @PostConstruct
        public void applyDefaultSearchValues() {
            sLog.debug("enter applyDefaultSearchValues");
            Map criteria = super.getCriteria();
            sLog.debug("current criteria:" + criteria);
            // clear all existing search criteria
            criteria.clear();
            DCBindingContainer container = getBindingsInternal();
            if (advancedSearchDefaultValues != null) {
                // Apply default values for advanced search
                sLog.debug("set advanced search items:" +
                           advancedSearchDefaultValues);
                for (String searchItemName :
                     (Set<String>)advancedSearchDefaultValues.keySet()) {
                    // copy default value only to items that exist
                    if (findSearchControlBinding(container, searchItemName) !=
                        null) {
                        Object asItemValue =
                            advancedSearchDefaultValues.get(searchItemName);
                        sLog.debug("set default value " + asItemValue +
                                   " for advsearch item: " + searchItemName);
                        criteria.put(searchItemName, asItemValue);
                    } else {
                        sLog.warn("search item for default value doesn't exist: " +
                                  searchItemName);
                    }
                }
            }
            // Apply default value for simple search
            if (quickSearchDefaultValues != null &&
                !quickSearchDefaultValues.isEmpty()) {
                // get first key from Quicksearch Map
                Set keyset = quickSearchDefaultValues.keySet();
                Iterator keyIter = keyset.iterator();
                String qsItem = (String)keyIter.next();
                sLog.debug("previous search item was " + getSearchItem());
                if (findSearchControlBinding(container, qsItem) != null) {
                    Object qsItemValue = quickSearchDefaultValues.get(qsItem);
                    // set Quicksearch item and its default value
                    sLog.debug("set quicksearch item: " + qsItem);
                    setSearchItem(qsItem);
                    if (qsItemValue != null && !"".equals(qsItemValue)) {
                        sLog.debug("set quicksearch item value: " + qsItemValue);
                        criteria.put(qsItem, qsItemValue);
                    }
                } else {
                    sLog.warn("search item for default value doesn't exist: " +
                              qsItem);
                }
            }
            sLog.debug("exit applyDefaultSearchValues");
        }
    }

    The method applyDefaultSearchValues() clears the existing search criteria, then looks up and applies the default search values from the managed properties. Note the annotation @PostConstruct: JSF fires applyDefaultSearchValues() once all managed properties have been set. As the search bean is pageflow-scoped, the method is automatically applied every time the pageflow is entered, for example from a menu item. This way the default search values are already set when entering the page. The method findSearchControlBinding() checks whether the specified item name is valid (i.e. has a control binding with the same name).

  6. To manually invoke the method applyDefaultSearchValues(), add a "Default" button next to the Search button of the Advanced Search. The button has an actionListener that invokes the new method of the search bean. In the JHeadstart Application Definition Editor, customize the template DO_ADVANCED_SEARCH_BUTTON at group (Employees) level. Add the following at the end of the new template custom/button/doAdvancedSearchDefaultButton.vm:

    <af:commandButton textAndAccessKey="#{nls['FIND_DEFAULT']}"
                      actionListener="#{searchEmployees.applyDefaultSearchValues}"/>

    In the same way, create a new customized template DO_QUICK_SEARCH_BUTTON (custom/button/doQuickSearchDefaultButton.vm):

    <af:commandButton textAndAccessKey="#{nls['FIND_DEFAULT']}"
                      actionListener="#{searchEmployees.applyDefaultSearchValues}"/>
  7. Add the button label to the application resource file. Locate the resource file ViewController\src\com\jhssearchdemo\view\ApplicationResources_en.properties and add a new entry:

      FIND_DEFAULT=Default

  8. Add the actionListener applyDefaultSearchValues to the search bean class. Note that there is already a method applyDefaultSearchValues in the search bean class, but not with the right signature for an action listener. Adding a small wrapper will help:

        /** Wrapper to set default values with action listener. @param evt */
        public void applyDefaultSearchValues(ActionEvent evt) {
            applyDefaultSearchValues();
        }
  9. Generate the application again, compile and run it. Navigate to the Employees tab to see the new "Default" button in the Quick Search. The default search item "Lastname" and value "O*" are also shown. Change the search item and/or value and press the new "Default" button to initialize the search again. Execute the search to verify that it works.

    The same functionality is available in the Advanced Search. Once the search is initialized, either manually or by the annotation, both Quick Search and Advanced Search are defaulted. The reason is that the method applyDefaultSearchValues() sets the default values for both Quick Search and Advanced Search. If this is not desired, the method can be split into separate methods for the Quick Search and Advanced Search default values, with two separate action listeners invoking them.

That's it. You can download a sample workspace that illustrates the solution here.

Categories: Development

Corporate Dashboards

Michael Armstrong-Smith - Fri, 2010-05-28 12:43
During a recent consulting engagement I was asked about dashboards and where one should begin when the boss comes in and says I want a dashboard. I decided what I needed to do was step back and look at the dashboard concept, then explain my understanding in simple terms. I share those thoughts here and invite your comments.

Dashboards are unique to an organization and what works in one place will not be suitable in another. But of course, it all depends on your definition of a dashboard. The one that I like and the one that keeps me out of mischief is this one:

A dashboard or dash board is a panel located under the windscreen containing indicators and dials such as the tachometer / speedometer and odometer. I bet you never thought it was so easy.

Seriously, look again at this definition and you will see the foundations of business dashboards. It is not the dials such as the tachometer, odometer and fuel gauge that are important. It is not the numbers either.

What is really important is the meaning or significance (aka the KPI) that is applied to the numbers. Thus, depending upon the situation, a speed of 100 mph might be considered excessive, particularly if being chased by an irate police officer down a busy city street. Do the same thing on a race track and you might be considered a menace for going too slow. But do 100 mph on an autobahn in Germany and no-one will bat an eyelid because it is perfectly acceptable. You can see that the gauge, in this case the speedometer and the 100 mph reading, is by itself meaningless as a KPI. It is only when you apply the criteria which states that 100 mph must be highlighted in red because it is excessive that a real KPI is born.

The concepts of dashboards in automobiles and in business are the same - they give us a snapshot of critical information at a moment in time. If you happen to be running out of fuel the dashboard will bring this fact to your attention. It does this by turning on a light or sounding a bell when a certain low point in the fuel tank is reached.

The vehicle dashboard needs to provide enough pertinent information so that informed decisions can be made as to how the vehicle is functioning.

Business dashboards need to provide enough pertinent information to the manager or executive so that they can make informed decisions as to how the department or company is functioning. Just like with a vehicle, a corporate dashboard needs to provide all of the critical information that is needed to run the organization's daily operations.

Most corporate dashboards are a snapshot in time, typically midnight, that tell an organization if it is spending cash too fast; or whether the percentage of patients who needed a repeat visit is higher than 5%; or whether the number of requests for service this week exceeded the number from last week by more than 10%. The common factor here is that a rule is being applied to the data to indicate that something needs to be brought to someone's attention.

In a business, you can imagine that every employee has a steering wheel and an accelerator pedal. However, it is not necessary that everyone gets the same dashboard. Since the user roles are different not everyone needs the same level and kind of information. The worker bees need to work, the managers need to manage, and the executives need to improve their golf handicap. Typically, higher executives want to manage by exception and will only become really interested when something out of the ordinary happens.

If an organization is truly managing by exception then it should have a goal to move routine work from the manager to the employee, thus leaving the manager more time to manage. By creating a dashboard that displays the KPIs that the manager is interested in, a quick glance to see that all is green is all that is needed. Good KPIs, and thus good dashboards, reduce micromanagement which is good for everyone involved.

Now that reminds me, golf anyone!

MDL import/export

Klein Denkraam - Fri, 2010-05-28 03:58

Some time ago, when we moved from OWB 9 to 10, I noticed the size of OWB exports into mdl files decreased dramatically. That was a good thing in that it was easier to distribute the mdl files, e.g. to production sites. But it also meant you could not read (or edit) the contents of the mdl files any more. I always assumed the new mdl format was some kind of binary optimized format, but today I read this blog from the OWB guys. It turns out that the ‘new’ format is just a normal zip file containing 2 files, and that you can specify the export to be done in the ‘old’ text format. Text you can edit!

It could be a means to move/copy one or more mappings from one project to another project. Not easy, as you must ‘hack’ the mdl file, but it can be done. Neat.

Follow @ORCL_Linux on Twitter

Sergio's Blog - Thu, 2010-05-27 02:06
We've created the following Twitter handles for those of you who like your Oracle Linux and virtualization news in micro chunks:
  • @ORCL_Linux (http://twitter.com/ORCL_linux)
  • @ORCL_virtualize (http://twitter.com/ORCL_virtualize)
Categories: DBA Blogs

Composite Interval Partitioning isn't as advertised.

Claudia Zeiler - Wed, 2010-05-26 18:37

Oracle® Database VLDB and Partitioning Guide 11g Release 1 (11.1) Part Number B32024-01 says:

Interval Partitioning

Interval partitioning is an extension of range partitioning which instructs the database to automatically create partitions of a specified interval when data inserted into the table exceeds all of the existing range partitions. You must specify at least one range partition.

You can create single-level interval partitioned tables as well as the following composite partitioned tables:

* Interval-range

* Interval-hash

* Interval-list

Sure, I can create these composite partitions, but the results aren't particularly useful. When I tried, Oracle spread my results nicely between the two hash subpartitions for the manually defined partition, but put everything in the same subpartition for the interval-generated partition. Notice that these are identical sets of rows; the only difference is the key that forces them into the manually specified partition or the generated partition. I assume that there is a Metalink note on this somewhere.

I got equivalent results for interval-list composite partitioning. I won't bore the reader with the step-by-step for that test, since the result is the same: all rows in the generated partitions are forced into one subpartition.

Here are my results for the interval hash test:

SQL> create table interval_hash (
N number,
N2 number)
partition by range(N) interval (2)
subpartition by hash(N2)
(partition p1 values less than (2)
  (subpartition p_1, subpartition p_2));

Table created.

SQL> BEGIN
  FOR i IN 1 .. 15 LOOP
    INSERT INTO interval_hash VALUES (5, i);
    INSERT INTO interval_hash VALUES (0, i);
  END LOOP;
  COMMIT;
END;
/

PL/SQL procedure successfully completed.

SQL> EXEC DBMS_STATS.gather_table_stats(USER, 'INTERVAL_HASH', granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> SELECT table_name, partition_name, subpartition_name, num_rows
FROM user_tab_subpartitions
ORDER by table_name, partition_name, subpartition_name;

TABLE_NAME           PARTITION_NAME       SUBPARTITION_NAME      NUM_ROWS
-------------------- -------------------- -------------------- ----------
INTERVAL_HASH        P1                   P_1                           6
INTERVAL_HASH        P1                   P_2                           9
INTERVAL_HASH        SYS_P138             SYS_SUBP137                  15

SQL> select * from interval_hash subpartition(p_2) order by n2;

N N2
---------- ----------
0 1
0 3
0 4
0 7
0 9
0 10
0 12
0 14
0 15

9 rows selected.

SQL> select * from interval_hash subpartition(p_1) order by n2;

N N2
---------- ----------
0 2
0 5
0 6
0 8
0 11
0 13

6 rows selected.

SQL> select * from interval_hash subpartition(SYS_SUBP137) ORDER BY N2;

N N2
---------- ----------
5 1
5 2
5 3
5 4
5 5
5 6
5 7
5 8
5 9
5 10
5 11
5 12
5 13
5 14
5 15

15 rows selected.

How to upgrade your Dell’s BIOS directly from Ubuntu

Eduardo Rodrigues - Mon, 2010-05-24 23:27
I know this post is totally off topic but I faced this same issue last week and I’m pretty sure this will be very handy for a lot of people out there. So why not share it, right?! Many people...

This is a summary only. Please, visit the blog for full content and more.

Micromanaging Memory Consumption

Eduardo Rodrigues - Mon, 2010-05-24 22:34
by Eduardo Rodrigues. As we all know, especially since Java 5.0, the JVM guys have been doing a good job and have significantly improved a lot of key aspects, especially performance and memory management,...

This is a summary only. Please, visit the blog for full content and more.

That's a whole lot of partitions!

Claudia Zeiler - Mon, 2010-05-24 19:26
Playing with interval partitioning...
I create the simplest table possible and insert 3 rows - generating 3 partitions.

SQL> create table d1 (dt date)
2 partition by range (dt) interval (numtoyminterval(1,'MONTH'))
3 (partition p1 values less than (to_date('01/01/1800', 'MM/DD/YYYY')));

Table created.

SQL> insert into d1 values (to_date('07/04/1776', 'MM/DD/YYYY'));

1 row created.

SQL> insert into d1 values (to_date('09/22/1862', 'MM/DD/YYYY'));

1 row created.

SQL> insert into d1 values (to_date('08/18/1920', 'MM/DD/YYYY'));

1 row created.

SQL> select * from d1;


SQL> select table_name, partition_name from user_tab_partitions where table_name = 'D1';

TABLE_NAME                     PARTITION_NAME
------------------------------ ------------------------------
D1                             P1
D1                             SYS_P62
D1                             SYS_P63

But when I look at the partition_count in user_part_tables...

SQL> select table_name, partition_count from user_PART_TABLES where table_name = 'D1';

TABLE_NAME                     PARTITION_COUNT
------------------------------ ---------------
D1                                     1048575

That's a whole lot of partitions! Clearly that is the maximum possible number of partitions. It's odd that the developers at Oracle chose to store that value there rather than the actual count of partitions created. They obviously have it available. Ah, the mysteries of the Oracle.
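
For the record, the actual number of partitions can of course be counted straight from the dictionary; a trivial check on the same D1 table:

-- Count the partitions that really exist, rather than reading PARTITION_COUNT.
SELECT COUNT(*) AS actual_partitions
  FROM user_tab_partitions
 WHERE table_name = 'D1';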

Traditional vs OLAP

Michael Armstrong-Smith - Fri, 2010-05-21 23:04
I have been following a very interesting thread on LinkedIn in the group called Data Warehouse & Business Intelligence Architects. The thread is discussing the pros and cons of OLAP as compared to more traditional methods of modeling. Personally I love these discussions. Here's what I recently said:

For me, probably an oldie in terms of these discussions, I have been working with modeling and data warehouses for coming up on 25 years. I find it very, very strange that for some reason the term OLAP gets pushed around as if it is the answer to everything. This is probably unfair to the technique, because it has actually been around in one form or another a lot longer than most people realise.

Long before the term was invented or, more to the point shall we say, the technique was discovered, documented and given a formal name, we have been able to model enormous data warehouses with enormous amounts of data. Databases with terabytes of data are not new.

If I'm following the thread correctly I see two schools of thought, one pushing OLAP as the bees' knees and one pushing relational modeling. As someone who entered this field not too many years after Dr. Edgar Codd was first touting his ideas to IBM, I can tell you that if a relational model is done correctly, with the right partitions, indexes and joins, I can design a data warehouse using traditional methods for far less money than most folks would have you believe it should cost.

I'm somewhat of a historian and I actually have in my possession a set of Dr. Codd's early drafts. It makes for fascinating reading. So to anyone who is not sold on the idea yet, I would urge you to read one of the many good books on the subject. You could do worse than to start with one of Ralph Kimball's books, but you might also want to look at Bill Inmon.

Personally, I don't adhere strictly to any of the fathers of data warehousing. I have read them all and I mix and match as the situation arises, replete with a little tangential leap from time to time, sometimes of faith but mostly based on experience. Oh yes, and occasionally I mix them all, you know, just for fun because, after all, this is a beautiful world and we are in a beautiful profession and we have beautiful problems to solve.

So, what do you think? Are you a purist, a traditionalist or a modernist, somewhere in between, or an amalgam of all three?

Should we ban anonymity on the Internet?

Peter O'Brien - Fri, 2010-05-21 09:46
In an Information Security article a few months back, Bruce Schneier (author of Schneier on Security) and Marcus Ranum put forward some points for and against internet anonymity. I have to admit that I agree with Schneier and find Ranum's argument quite weak. He appears to suggest that the main reason to enforce identity is to avoid spam. The tools aren't great, but there are already mechanisms in place to address this. Criminals are always getting better at finding ways to exploit weaknesses in the internet technologies that are increasingly at the heart of the way we shop, interact, work, entertain and inform ourselves. We just have to keep up with the pace in the cat-and-mouse game. Sacrificing anonymity, and the right to privacy, is too great a cost for merely avoiding emails about Viagra (tm) and Nigerian generals with a stash of cash to move out of the country.

What is the great danger of not being anonymous? Well, it's all the inferring that goes on from the facts gathered about the things you search for, shop for, chat about, view and listen to. These are then used to categorise you for advertising, or for inclusion in or exclusion from groups or activities. Netflix provided a great example of this last year: just weeks after the contest began, two University of Texas researchers showed that with the Netflix data one could identify users and, in some cases, their political leanings and sexual orientation.

Getting back to Schneier's point, trying to implement a robust identification system, one which criminals cannot outwit or take advantage of, is not possible...
Mandating universal identity and attribution is the wrong goal. Accept that there will always be anonymous speech on the Internet. Accept that you'll never truly know where a packet came from. Work on the problems you can solve: software that's secure in the face of whatever packet it receives, identification systems that are secure enough in the face of the risks. We can do far better at these things than we're doing, and they'll do more to improve security than trying to fix insoluble problems.

Tech M&A deals of 2010

Vikas Jain - Wed, 2010-05-19 18:51
Here's some notable tech M&A activity through May 2010.

In Security space,
  • Oracle IdM adding identity analytics (OIA) to its portfolio through the broader Sun acquisition
  • Symantec enhancing encryption portfolio with PGP, GuardianEdge, and vulnerability assessment offering through Gideon Technologies
  • EMC's RSA Security Division acquired Archer Technologies for GRC across physical+virtual infrastructures
  • Trustwave acquired Intellitactics for SIEM to enhance PCI compliance offering, and BitArmor to enhance endpoint security offering
In Cloud computing space,
  • VMware seems to be building up a cloud PaaS platform, acquiring SpringSource (in 2009), and now Zimbra and Rabbit Technologies
  • CA acquired Nimsoft and 3Tera to manage cloud environments
  • Cisco acquired Rohati Systems for cloud security in Cisco's Nexus switch line
In Mobile space,
  • SAP planning to buy Sybase for its mobile middleware
  • Apple getting Siri, HP getting Palm, RIM getting Viigo
Network World slideshow on Tech acquisitions of 2010
PWC report on Tech M&A insights for 2010

How to Calculate TCP Socket Buffer Sizes for Data Guard Environments

Alejandro Vargas - Wed, 2010-05-19 05:31

The MAA best practices contain an example of how to calculate the optimal TCP socket buffer sizes, which is quite important for very busy Data Guard environments. This document, Formula to Calculate TCP Socket Buffer Sizes.pdf, contains an example of applying the instructions provided in the best practices document.

In order to execute the calculation you need to know the bandwidth of your network interface, which will usually be 1Gb (in my example it is a 10Gb network), and the round trip time (RTT), which is the time it takes a packet to travel to the other end of the network and come back; in my example that was provided by the network administrator and was 3 ms (milliseconds, i.e. thousandths of a second).
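
As a rough worked example (my own sketch, not taken from the whitepaper): the bandwidth-delay product is simply bandwidth multiplied by RTT, and the socket buffer is then sized at a multiple of that product. The 3x factor below is the figure commonly quoted in the MAA material, so treat it as an assumption and check the paper for the exact recommendation.

-- Bandwidth-delay product for a 10 Gb/s link with a 3 ms round trip time,
-- and a socket buffer sized at 3 x BDP (the 3x multiplier is an assumption).
-- 10,000,000,000 bits/s / 8 = 1,250,000,000 bytes/s; x 0.003 s = 3,750,000 bytes.
SELECT (10000000000 / 8) * 0.003     AS bdp_bytes,
       3 * (10000000000 / 8) * 0.003 AS socket_buffer_bytes
  FROM dual;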

Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator