There was a discrepancy in the failgroups of a couple of ASM disks in Exadata. In Exadata, the cell name corresponds to the failgroup name, but there were a couple of disks with different failgroup names. We used the following plan to rectify the issue online, without any downtime:
1) Check disks and their failgroup:
col name format a27
col path format a45
SQL> select path,failgroup,mount_status,mode_status,header_status,state from v$asm_disk order by failgroup, path;
o/100.100.00.000/DBFSDG_CD_09_mycellnet0 mycellNET0 CACHED ONLINE MEMBER NORMAL
o/100.100.00.000/DATA_CD_08_mycellnet0 mycell_NET0 CACHED ONLINE MEMBER NORMAL
2) Drop Disks:
ALTER DISKGROUP DATA DROP DISK DATA_CD_08_mycellnet0 REBALANCE POWER 32;
3) Wait for rebalancing to finish:
select * from gv$asm_operation;
4) Add the disks to the correct failgroups
ALTER DISKGROUP DATA ADD FAILGROUP mycellNET0 DISK 'o/100.100.00.000/DATA_CD_08_mycellnet0' REBALANCE POWER 32;
- Wait for rebalance to complete.
5) select * from v$asm_operation;
6) Verify the incorrect failgroup has gone
select name,path,failgroup from v$asm_disk where failgroup in ('mycell_NET0') order by name;
select path,failgroup,mount_status,mode_status,header_status,state from v$asm_disk order by failgroup, path;
One of the well-known best practices for HDFS is to store data in a few large files, rather than a large number of small ones. There are a few problems related to using many small files, but the ultimate HDFS killer is that the memory consumption on the name node is proportional to the number of files stored in the cluster, and it doesn’t scale well when that number increases rapidly.
MapR has its own implementation of the Hadoop filesystem (called MapR-FS), and one of its claims to fame is to scale and work well with small files. In practice, though, there are a few things you should do to ensure that the performance of your map-reduce jobs does not degrade when they are dealing with too many small files, and I’d like to cover some of those.

The problem
I stumbled upon this when investigating the performance of a job in production that was taking several hours to run on a 40-node cluster. The cluster had spare capacity but the job was progressing very slowly and using only 3 of the 40 available nodes.
When I looked into the data that was being processed by the active mappers, I noticed that the vast majority of the splits being read by the mappers were in blocks that were replicated to the same 3 cluster nodes. There was a significant data distribution skew towards those 3 nodes, and since map-reduce tasks prefer to execute on nodes where the data is local, the rest of the cluster sat idle while those 3 nodes were IO bound and processing heavily.

MapR-FS architecture
Unlike HDFS, MapR-FS doesn’t have name nodes. The file metadata is distributed across the data nodes instead. This is the key to getting rid of the name node memory limitation of HDFS, and it lets MapR-FS handle a lot more files, small or large, than an HDFS cluster.
Files in MapR-FS have, by default, blocks of 256MB. Blocks are organised in logical structures called “containers”. When a new block is created it is automatically assigned to one existing container within the volume that contains that file. The container determines the replication factor (3 by default) and the nodes where the replicas will be physically stored. Containers are bound to a MapR volume and cannot span multiple volumes.
There’s also a special container in MapR-FS called a “name container”, which is where the volume namespace and file chunk locations are stored. Besides the metadata, the name container always stores the first 64KB of the file’s data.
Also, there’s only a single name container per MapR-FS volume. So the metadata for all the files in a volume, along with each file’s first 64KB of data, will all be stored in the same name container. The larger the number of files in a volume, the more data this container will be replicating across the same 3 cluster nodes (by default).
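To make the arithmetic concrete, here is a rough back-of-the-envelope sketch in Python, using invented file sizes, of how much of a data set ends up in the name container under the 64KB-per-file rule described above:

```python
# Rough model: for each file, the first 64 KB (or the whole file, if smaller)
# lands in the volume's single name container; the rest goes to other containers.
NAME_CONTAINER_CUTOFF = 64 * 1024  # first 64 KB of every file

def name_container_fraction(file_sizes):
    """Fraction of the total data that ends up in the name container."""
    in_name = sum(min(size, NAME_CONTAINER_CUTOFF) for size in file_sizes)
    total = sum(file_sizes)
    return in_name / total

# 100,000 small files of 60 KB each: everything sits in the name container,
# i.e. on the same 3 nodes (with the default replication factor).
small = [60 * 1024] * 100_000
print(name_container_fraction(small))   # 1.0

# A similar data volume in 256 MB files: only a sliver is in the name container.
large = [256 * 1024 * 1024] * 24
print(round(name_container_fraction(large), 4))   # 0.0002
```

With small files the name container holds 100% of the data; with 256MB blocks it holds a fraction of a percent, which is exactly why large files spread the load across the cluster.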
So, if your data set is comprised of a very large number of small files (with sizes around 64KB or less) and is all in the same volume, most of the data will be stored in the same 3 cluster nodes, regardless of the cluster size. Even if you had a very large cluster, whenever you ran a map-reduce job to process those files, the job’s tasks would be pretty much allocated on only 3 nodes of the cluster due to data locality. Those 3 nodes would be under heavy load while the rest of the cluster sat idle.

Real impact
To give you an idea of the dimension of this problem, the first time I noticed this in production was due to a Hive query that was causing high load on only 3 nodes of a 40-node cluster. The job took 5 hours to complete. When I looked into the problem, I found that the table used by the Hive query had tens of thousands of very small files, many of them smaller than 64KB, due to the way the data was being ingested.
We coalesced the table to combine all those small files into a much smaller number of bigger ones. The job ran again after that, without any changes, and completed in just 15 minutes!! To be completely fair, we also changed the table’s file format from SequenceFile to RCFile at the same time we coalesced the data, which certainly brought some additional performance improvements. But, from the 3-node contention I saw during the first job run, I’m fairly convinced that the main issue in this case was the data distribution skew due to the large number of small files.

Best practices
This kind of problem is mitigated when large files are used, since only a small fraction of the data (everything below the 64KB mark) will be stored in the name container, with the rest distributed across other containers and, therefore, other nodes. We’ll also have a smaller number of files (for a similar data volume), which reduces the problem even more.
If your data is ingested in a way that creates many small files, plan to coalesce those files into larger ones on a regular basis. One good tool for that is Edward Capriolo’s File Crusher. This is also (and especially) applicable to HDFS.
Best practice #1: Keep your data stored in large files. Pay special attention to incremental ingestion pipelines, which may create many small files, and coalesce them on a regular basis.
A quick and dirty workaround for the 3-node contention issue explained above would be to increase the replication factor for the name container. This would allow more nodes to run map-reduce tasks on that data. However, it would also use a lot more disk space just to achieve the additional data locality across a larger number of nodes. This is NOT an approach I would recommend to solve this particular problem.
Instead, the proper way to solve this in MapR-FS is to split your data across different volumes. Especially if you’re dealing with a large number of small files that cannot be coalesced, splitting them across multiple volumes will keep the number of files per volume (and per name container) under control, and it will also spread the small files’ data evenly across the cluster, since each volume will have its own name container, replicated across a different set of nodes.
The volumes may, or may not, follow your data lifecycle, with monthly, weekly or even daily volumes, depending on the amount of data being ingested and files being created.
Best practice #2: Use MapR-FS volumes to plan your data distribution and keep the number of files per volume under control.
Yesterday, Cloudera released the score reports for their Data Science Challenge 2014 and I was really ecstatic when I received mine with a “PASS” score! This was a real challenge for me and I had to put a LOT of effort into it, but it paid off in the end!
Note: I won’t bother you in this blog post with the technical details of my submission. This is just an account of how I managed to accomplish it. If you want the technical details, you can look here.
I first learned about the challenge last year, when Cloudera ran it for the first time. I was intrigued, but after reading more about it I realised I didn’t have what would be required to complete the task successfully.
At the time I was already delving into the Hadoop world, even though I was still happily working as an Oracle DBA at Pythian. I had studied the basics and the not-so-basics of Hadoop, and the associated fauna and had just passed my first Hadoop certifications (CCDH and CCAH). However, there was (and is) still so much to learn! I knew that to take the challenge I would have to invest a lot more time into my studies.
“Data Science” was still a fuzzy buzzword for me. It still is, but at the time I had no idea what was behind it. I remember reading this blog post about how to become a data scientist. A quick look at the map in that post turned me off: apart from the “Fundamentals” track, I had barely any idea what the rest of the map was about! There was a lot of work to do to get there.

There’s no free lunch
But as I started reading more about Data Science, I began to realise how exciting it was and how interesting the problems it could help tackle were. By then I had already put my DBA career on hold and joined the Big Data team. I felt a huge gap between my expertise as a DBA and my skills as a Big Data engineer, so I put a lot of effort into studying the cool things I wanted to know more about.
The online courses at Coursera, edX, Stanford and the like were a huge help, and soon I was wading through course after course, sometimes many at once: Scala, R, Python, more Scala, data analysis, machine learning, and more machine learning, etc. It was not easy, and it was a steep learning curve for me. The more I read and studied, the more I realised how much there still was to learn. And there still is…

The Medicare challenge
But when Cloudera announced the 2014 Challenge, early this year, I read the disclaimer and realised that this time I could understand it! Even though I had just scratched the surface of what Data Science is meant to encompass, I actually had tools to attempt tackling the challenge.
“Studies shall not stop!!!”, I soon found, as I had a lot more to learn to first pass the written exam (DS-200) and then tackle the problem proposed by the challenge: to detect fraudulent claims in the US Medicare system. It was a large undertaking, but I took it one step at a time and eventually managed to complete a coherent and comprehensive abstract to submit to Cloudera, which, as I gladly found yesterday, was good enough to give me a passing score and the “CCP: Data Scientist” certification from Cloudera!

I’m a (Big Data) Engineer
What’s next now? I have only one answer: keep studying. There’s so much cool stuff to learn. From statistics (yes, statistics!) to machine learning, there’s still a lot I want to know about, and that keeps driving me forward. I’m not turning into a Data Scientist, at least not for a while. I am an Engineer at heart; I like to fix and break things at work, and Data Science is one more of those tools I want to have to make my job more interesting. But I want to know more about it and learn how to use it properly, at least to stop my Data Scientist friends cringing away every time I tell them I’m going to run an online logistic regression!
The idea of this blog post is to describe what the delayed durability feature is in SQL Server 2014 and to describe a use case from an application development perspective.
With every new SQL Server release we get a bunch of new features, and delayed durability of transactions really caught my attention. Most relational database engines handle transactions with the write-ahead logging method (http://en.wikipedia.org/wiki/Write-ahead_logging): when a transaction comes into the database, in order to commit a piece of information successfully, the engine flushes the pages from memory, writes to the transaction log, and finally writes to the datafile, always in a synchronous order. Since the transaction log is essentially a record of each transaction, recovery methods can even retrieve the data from the log in case the data pages were never committed to the datafile. In summary, this is a data protection method used to handle transactions; MSDN calls this a transaction with FULL DURABILITY.
So what is Delayed Transaction Durability?
To accomplish delayed durability in a transaction, log writes happen asynchronously from the buffers to disk. Information is kept in memory until either the buffer is full or a flush takes place. This means that instead of flushing from memory to the log and then to the datafile, the data simply waits in memory, and control of the transaction is returned to the requesting application faster. A transaction that initially only hits memory and avoids going through the disk heads will certainly complete faster as well.
But when is the data really stored on disk?
SQL Server will handle this depending on how busy/full the memory is, and will then execute asynchronous writes to finally store the information on disk. You can always force this to happen with the stored procedure sp_flush_log.
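As a toy model of the two commit paths (an analogy only, not SQL Server internals): a fully durable commit hardens the log record to stable storage before returning, while a delayed durable commit only appends to an in-memory buffer that gets hardened later, when the buffer fills or on an explicit flush such as sp_flush_log.

```python
class LogBuffer:
    """Toy analogy of the transaction log: 'pending' records are only in
    memory (lost on a crash); 'hardened' records are safely on disk."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.pending = []        # buffered, not yet on "disk"
        self.hardened = []       # safely on "disk"

    def commit_full_durability(self, record):
        # Synchronous: the record is on disk before control returns.
        self.hardened.append(record)

    def commit_delayed_durability(self, record):
        # Asynchronous: control returns as soon as the record is buffered.
        self.pending.append(record)
        if len(self.pending) >= self.capacity:
            self.flush()

    def flush(self):
        # The equivalent of sp_flush_log: harden everything buffered so far.
        self.hardened.extend(self.pending)
        self.pending.clear()

log = LogBuffer(capacity=4)
log.commit_delayed_durability("tx1")
log.commit_delayed_durability("tx2")
# A crash at this point would lose tx1 and tx2: buffered, but not hardened.
print(log.pending)    # ['tx1', 'tx2']
log.flush()
print(log.hardened)   # ['tx1', 'tx2']
```

The model makes the trade-off visible: delayed commits return immediately, but anything still in `pending` at crash time is gone, which is exactly the risk discussed next.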
OK, but there is a risk, right?
Correct. Since the original data protection method is essentially skipped, in the event of a system disruption, such as SQL Server failing over or simply shutting down unexpectedly, some data can be lost in the so-called limbo somewhere between the application pool and the network cable.
Why would I want to use this?
Microsoft recommends using this feature only if you can tolerate some data loss, if you are experiencing a bottleneck or performance issue related to log writes, or if your workload has a high contention rate (processes waiting for locks to be released).
How do I Implement it?
To use delayed transactions, you enable this as a database property. You can use the FORCED option, which will try to handle all transactions as delayed durable, or the ALLOWED option, which lets you use delayed durable transactions that you then specify in your T-SQL (this is called atomic block level control). See a sample taken from MSDN below:
CREATE PROCEDURE …
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH
(
    DELAYED_DURABILITY = ON,
    TRANSACTION ISOLATION LEVEL = SNAPSHOT,
    LANGUAGE = N'English'
)
    …
END
For more syntax and details I invite you to check the so full of wisdom MSDN Library.
Enough of the background information, and let’s take this puppy for a ride, shall we?
Consider the following scenario: you manage a huge application, something between an ERP and a finance module. The company has developed this application from scratch, and each year more and more features are added to it. The company decides it wants to standardize procedures and have more control over the events of the application. They realize they do not have enough audit traces: if someone deletes data, or if a new deal or customer record is inserted, management needs a track record of almost anything that happens. They have some level of logging, but it is implemented differently depending on each developer's taste and mood.
So, Mr. MS Architect decides they will implement the Enterprise Library logging block and handle both exceptions and custom logging with this tool. After adding all this logging to the events, the system begins to misbehave, and the usual slow is now officially really slow. Mr. Consultant then comes in and suggests that the logging data be moved to a separate database, and that this database use delayed durability. By doing so, transactions related to logging events will have less contention and will return control faster to the application; some level of data loss can be tolerated, which makes the decision even better.
Let’s build a proof of concept and test it..
You can find a sample project attached: WebFinalNew
You need to have enterprise library installed in your visual studio. For this sample I am using Visual Studio 2010.
You need to create 2 databases, DelayedDB and NormalDB (Of Course we need to use SQL Server 2014)
Use the attached script LoggingDatabase (which is part of the scripts of Enterprise library), it will create all the objects needed for the application log block.
In the DelayedDB database, edit the properties and set Delayed Durability to FORCED; this will make all transactions delayed durable (please note that some transactions will never be delayed durable, such as system transactions, cross-database transactions, and operations involving FileTable, Change Tracking, and Change Data Capture).
You need to create a Windows web project; it should have a web.config, and if it doesn't, you can manually add a configuration file:
Make sure you add all the application block references(Logging Block)
Now right click over the web.config or app.config file and edit your enterprise library configuration
In the Database Settings block, add 2 new connections to your databases (one for NormalDB and the other for DelayedDB); make sure to specify the connection in the form of a connection string, like the picture below:
In the Logging block, create a new category called DelayedLogging, this will point to the database with delayed durability enabled.
Then add 2 database Trace listeners, configure General Category to point to “Database Trace Listener” and then configure DelayedLogging Category to point to “Database Trace Listener 2”. Configure each listener to point to the corresponding database(one to each database previously configured in the Database block)
Save all changes and go back to the application, configure the design layout with something like below
Add a code-behind to the button in the upper screen and build code that will iterate and send X amount of commands to each database, tracking in a variable the time it takes to send the transaction and regain control of the application. Check the attached project for more details, but use logWriter.Write and pass as a parameter the category you configured to connect to DelayedDB (DelayedLogging), and use the general category (default, no parameter) to connect to NormalDB. See a sample of how a logging transaction is fired below:
logWriter.Write("This is a delayed transaction","DelayedLogging");
logWriter.Write("This is a transaction");
This code will call the logging block and execute a transaction on each database, the “normal” one and the delayed durable one. It will also track the milliseconds it takes to return control to the application. Additionally, I will have Performance Monitor running and query statistics from the database engine to see the difference in behavior.
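The measurement side of this is straightforward. As an illustration only (the attached project does the equivalent in C# around each logWriter.Write call), a minimal Python sketch of timing a single call and returning the elapsed milliseconds:

```python
import time

def timed_ms(call, *args):
    """Run one call and return (result, elapsed milliseconds) -- the same
    idea as timing each logging call to measure how quickly control
    returns to the application."""
    start = time.perf_counter()
    result = call(*args)
    return result, (time.perf_counter() - start) * 1000.0

# Example with a stand-in for the real logging call:
result, ms = timed_ms(lambda msg: msg.upper(), "this is a transaction")
print(result, f"{ms:.3f} ms")
```

With delayed durability, this elapsed time reflects only how quickly the buffer accepts the record, not how long the physical write takes.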
What information do we have from sys.dm_io_virtual_file_stats?

Database   io_stall_read_ms  num_of_writes  num_of_bytes_written  io_stall_write_ms  io_stall  size_on_disk_bytes
DelayedDB  47                5126           13843456              4960               5007      1048576
Normal     87                5394           14492160              2661               2748      1048576
We can see that a similar amount of data was sent to both databases (last column, size_on_disk_bytes). An interesting observation is the stalls: in a delayed durable database, the write stall is higher. This means that although the transaction executes “faster”, what really happens is that control returns to the application faster, while the time it takes to actually store the data on disk can be higher, since it is done in async mode.
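The per-write figures make the point even more clearly. Dividing the write stall by the number of writes, using the numbers from the table above:

```python
# Average write stall per log write, from the sys.dm_io_virtual_file_stats
# figures above: delayed durability trades a faster commit for slower
# (batched, asynchronous) physical writes.
stats = {
    "DelayedDB": {"io_stall_write_ms": 4960, "num_of_writes": 5126},
    "Normal":    {"io_stall_write_ms": 2661, "num_of_writes": 5394},
}

for db, s in stats.items():
    per_write = s["io_stall_write_ms"] / s["num_of_writes"]
    print(f"{db}: {per_write:.2f} ms average stall per write")
```

DelayedDB averages roughly 0.97 ms of stall per write versus roughly 0.49 ms for the normal database, i.e. about twice the physical write latency, even though the application regains control sooner.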
Let’s see a quick graphic of the performance impact
With delayed durability, the average disk queue length is higher, since SQL Server waits to fill the buffer and then executes the write. You can see the yellow peak (within the red circle) after the transaction completes: it executes the pending writes (the moment where I issue sp_flush_log).
With full durability, the average disk queue length is lower; since the writes are executed sequentially, there are fewer pending transactions in memory.
The delayed durability feature is definitely a great addition to your DBA tool belt. It needs to be used taking into consideration all the risks involved, but if properly tested and implemented it can definitely improve the performance and architecture of certain applications. It is important to understand that this is not a turbo button (the way some people treat the NOLOCK hint) and that it should be used only for certain types of transactions and tables. Will this change your design methods and make you plan for a separate delayed durable database? Or plan to implement certain modules with delayed durable transactions? This will surely have an interesting impact on software design and architecture.
If you go to the forum and search, for example, for "APEX" or "Application Exp", you will see no results. Typing in "Application Ex" will find "Application Express".
Each of the found links will have a funny description saying:
"An error occurred processing your request. If this problem persists, please contact the webmaster or administrator of this site."
:) So, it seems there are now even more bugs than before.
Probably the intention behind changing the forum wasn't bad. However, once you manage to open it you will see a lot of information you don't need (or at least not all of the time). The real content is somewhere underneath and needs scrolling, like on Facebook (oh, how I hate that site). And the worst thing is that you can see only ten threads per page - if you want to see more, then click and scroll again. For those interested in helping others, this makes things much more complicated.
One positive thing though. :) My name suddenly appears in the top list of the participants in the forum. The list isn't reduced to the top five but it now shows the top six. Top six is obviously the new top five. ;)
By Ty Duval, Consulting Senior Practice Director, WebCenter, Oracle Consulting Services
At the Crossroads
I frequently encounter companies at the crossroads in their efforts to become digital businesses. Their journeys proceed along familiar paths and I can readily anticipate what their next steps should be. To begin with, these firms launched their initial web sites more than 15 years ago, and have steadily added multiple web-based applications (running on disparate systems) to support targeted initiatives. IT and business leaders are certainly web-aware, if not already web-savvy.
Yet a lot has changed over the past decade. Web-powered solutions are no longer nice-to-have additions to enterprise architectures and applications. Rather, these solutions are core capabilities for achieving strategic business objectives.
The Business Value for WebCenter
IT leaders must now provide both internal and external customers with the branded experiences for managing and using online content, while sharply reducing costs and accelerating time to market. It’s necessary -- but no longer sufficient -- to simply consolidate web sites by introducing standardized platforms and services that reduce technical footprints.
Instead, IT groups need to refresh, modernize, and mobilize their enterprise application infrastructures. There is also an evolution of responsibilities. Individual business units, not the IT groups, should create and manage all of the content required for engaging customers and driving the branded experiences across their organizations.
Of course, Oracle WebCenter provides the tooling for delivering effective enterprise-scale applications. Yet implementation makes a big difference. At OCS, we focus on three factors for deploying digital business solutions – consultative engagement, content inventory, and content reuse. Let me explain why these factors make a difference.
First, the OCS engagement model is a consultative process. We work alongside business stakeholders and creative teams to define the requirements for building branded experiences. With our deep technical knowledge and product expertise, we can help define how to use the right tool for the right job in the right way.
There is often a gap between what the business envisions and what the tools deliver. By being part of the conversation from the start, OCS consultants can bridge the gap, and make timely recommendations that leverage the key capabilities of the enabling tools and technologies. Then, when it comes to implementation, consultants can rapidly prototype and produce frequent enhancements on an ongoing basis. Utilizing an agile development methodology, they can work closely with business users and designers to mold the digital environment.
Second, branded experiences depend on content. In any engagement, it’s essential to determine what information already exists and can be readily incorporated into the new solution, as well as what content is entirely missing and needs to be created. A content inventory maps the “to be” state about what information customers require, against the “as is” condition describing and categorizing all the content items that are currently available.
OCS consultants work with business stakeholders and creative teams to identify the kinds of content needed to support particular experiences. It is also important to identify the content owners who are responsible for producing the needed information, both currently and in the future. Often the content already exists in one repository or another. The design challenge then is to compile and organize the information from disparate sources.
The content inventory can also uncover the missing text, images, and rich media assets that customers expect as part of their experiences. OCS consultants can then work with line-of-business organizations to define new content management processes – the people, tasks, and activities required for creating and maintaining these needed information sources. Once deployed, the line organizations should be responsible for managing the content without IT support.
Third, a successful digital business initiative depends on content reuse – the ability to create content items once, manage them systematically, and distribute them as needed across the enterprise. As an example, there should be a single source of content that describes the capabilities of a new product on a company’s web site, and the corresponding promotions contained in personalized email messages sent to prospective customers.
When it comes to building branded experiences, more is at stake than storing content within a shared repository or relying on a predefined set of editorial workflows for review and approvals. Reuse requires an appreciation for the power of content and an understanding of how to manage it for competitive advantage.
This is where WebCenter deployment expertise pays off. OCS consultants have the technical skill sets and business insights for defining the content models and metadata essential to ensure content reuse. They can utilize the appropriate capabilities of various WebCenter products for business results.
Knowhow and Experience
In short, there’s an art and a science to building branded experiences for digital businesses. Successful companies are going to transform – and digitize – key aspects of their ongoing operations, and create new business processes along the way. Different firms and even entire industries are going to pursue their own particular paths.
But there are common threads to weaving together the applications for next-generation, digitally empowered environments. It takes knowhow and experience. When implementing WebCenter, OCS consultants have the insights, methodologies, and tools to help companies make the journeys and become digital businesses.
SQL Server database backup & restore from on-premises to Azure is a feature introduced with SQL Server 2012 SP1 CU2. In the past, it could be used with the following tools:
- Transact-SQL (T-SQL)
- SQL Server Management Objects (SMO)
With SQL Server 2014, backup & restore can also be enabled via SQL Server Management Studio (SSMS).
A significant fraction of IT professional services industry revenue comes from data integration. But as a software business, data integration has been more problematic. Informatica, the largest independent data integration software vendor, does $1 billion in revenue. INFA’s enterprise value (market capitalization after adjusting for cash and debt) is $3 billion, which puts it way short of other category leaders such as VMware, and even sits behind Tableau.* When I talk with data integration startups, I ask questions such as “What fraction of Informatica’s revenue are you shooting for?” and, as a follow-up, “Why would that be grounds for excitement?”
*If you believe that Splunk is a data integration company, that changes these observations only a little.
On the other hand, several successful software categories have, at particular points in their history, been focused on data integration. One of the major benefits of 1990s business intelligence was “Combines data from multiple sources on the same screen” and, in some cases, even “Joins data from multiple sources in a single view”. In the last few years before application servers were commoditized, data integration was one of their chief benefits. Data warehousing and Hadoop both of course have a “collect all your data in one place” part to their stories — which I call data mustering — and Hadoop is a data transformation tool as well.
And it’s not as if successful data integration companies have no value. IBM bought a few EAI (Enterprise Application Integration) companies, plus top Informatica competitor Ascential, plus Cast Iron Systems. DataDirect (I mean the ODBC/JDBC guys, not the storage ones) has been a decent little business through various name changes and ownerships (independent under a couple of names, then Intersolv/Merant, then independent again, then Progress Software). Master data management (MDM) and data cleaning have had some passable exits. Talend raised $40 million last December, which is a nice accomplishment if you’re French.
I can explain much of this in seven words: Data integration is both important and fragmented. The “important” part is self-evident; I gave examples of “fragmented” a couple years back. Beyond that, I’d say:
- A new class of “engine” can be a nice business — consider for example Informatica/Ascential/Ab Initio, or the MDM players (who sold out to bigger ETL companies), or Splunk. Indeed, much early Hadoop adoption was for its capabilities as a data transformation engine.
- Data transformation is a better business to enter than data movement. Differentiated value in data movement comes in areas such as performance, reliability and maturity, where established players have major advantages. But differentiated value in data transformation can come from “intelligence”, which is easier to excel in as a start-up.
- “Transparent connectivity” is a tough business. It is hard to offer true transparency, with minimal performance overhead, among enough different systems for anybody to much care. And without that you’re probably offering a low-value/niche capability. Migration aids are not an exception; the value in those is captured by the vendor of what’s being migrated to, not by the vendor who actually does the transparent translation. Indeed …
- … I can’t think of a single case in which migration support was a big software business. (Services are a whole other story.) Perhaps Cast Iron Systems came closest, but I’m not sure I’d categorize it as either “migration support” or “big”.
And I’ll stop there, because I’m not as conversant with some of the new “smart data transformation” companies as I’d like to be.
- DBMS transparency layers never seem to sell well (April, 2009)
- ClearStory’s approach to data integration (September, 2013)
- Judging opportunities (July, 2014)
One of the criticisms of MacIntyre is that his critique of rational ethics is, on the one hand, devastating; on the other hand, his positive case for working out a defense of his own position - a revivification of social ethics in the Aristotelian-Thomist tradition(s) - was somewhat pro forma. I think this is legitimate insofar as it relates to After Virtue itself (I believe I have read the latest edition - the third - most recently), though I am not enough of a MacIntyre expert to offer a defensible critique of his work overall.
I do, however, want to draw attention to Dependent Rational Animals specifically in this light. Here MacIntyre begins with the position of the human as animal - a kind of naturalist starting point for developing another pass at the importance of the tradition of the virtues. What is most remarkable is that in the process of exploring the implications of our "animality" MacIntyre manages to subvert yet another trajectory of twentieth-century philosophy, this time as it relates to the primacy of linguistics. The net effect is to restore philosophical discourse back toward the reality of the human condition within the broader evolutionary context of life on earth without - and this I must say is the most amazing part of this book - resorting to fables-masked-as-science (evolutionary psychology).
It would be deeply unfair of me to mock Blackboard for having a messy but substantive keynote presentation and not give equal time to D2L’s remarkable press release, pithily entitled “D2L Supercharges Its Integrated Learning Platform With Adaptive Learning, Robust Analytics, Game-Based Learning, Windows® 8 Mobile Capabilities, And The Newest Education Content All Delivered In The Cloud.” Here’s the first sentence:
D2L, the EdTech company that created the world’s first truly integrated learning platform (ILP), today announces it is supercharging its ILP by providing groundbreaking new features and partnerships designed to personalize education and eliminate the achievement gap.
I was going to follow that quote with a cutting remark, but really, I’m not sure that I have anything to say that would be equal to the occasion. The sentence speaks for itself.
For a variety of reasons, Phil and I did not attend D2L FUSION this year, so it’s hard to tell from afar whether there is more going on at the company than meets the eye. I’ll do my best to break down what we’re seeing in this post, but it won’t have the same level of confidence that we have in our Blackboard analysis.
Let me get to the heart of the matter first. Does it look to us like D2L has made important announcements this year? No, it does not. Other than, you know, supercharging its ILP by providing groundbreaking new features and partnerships designed to personalize education and eliminate the achievement gap. They changed their product name to “Brightspace” and shortened their company name to D2L. The latter strikes me as a particularly canny PR move. If they are going to continue writing press releases like their last one, it is probably wise to remove the temptation of the endless variety of potential “Desire2” jokes. Anyway, THE Journal probably does the best job of summarizing the announcements. For an on-the-ground account of the conference and broader observations about shifts in the company’s culture, read D’Arcy Norman’s post. I’ve been following D’Arcy since I got into blogging ten years ago and have learned to trust his judgment as a level-headed on-the-ground observer.
From a distance, a couple of things jump out at me. First, it looks to me like D2L is trying to become a kind of a content player. Having acquired the adaptive platform in Knowillage, they are combining it with the standards database that they acquired with the Achievement Standards Network. They are also making a lot of noise about enhancements to and content partnerships for their Binder product, which is essentially an eBook platform. Put all of this together, and you get something that conceptually is starting to look (very) vaguely like CogBooks. It wants to be an adaptive courseware container. If D2L pulls this off it will be significant, but I don’t see signs that they have a coherent platform yet—again, acknowledging that I wasn’t able to look at the strategy up close at FUSION this year and could easily be missing critical details.
Second, their announcement that they are incorporating IBM’s Cognos into their Insights learning analytics platform does not strike me as a good sign for Insights. As far as we have been able to tell from our sources, that product has languished since Al Essa left the company for McGraw Hill. One problem has been that their technical team was unable to deliver on the promise of the product vision. There were both data integrity and performance issues. This next bit is speculation on my part, but the fact that D2L is announcing that they plan to use the Cognos engine suggests to me that the company has thus far failed to solve those problems and now is going to a third party to solve them. That’s not necessarily a bad strategy, but it reinforces our impression that they’ve lost another year on a product that they hyped to the heavens and raises questions about the quality of their technical leadership.
This was just a proof-of-concept, not something I intend to actually leave running.
EPG on Port 8080
I do other testing on the home network too, so I already had my router configured to forward port 80 to another environment. That meant the router's web admin had been shifted to port 8080, and it wouldn't let me use that. Yes, I should find an open-source firmware, but OpenWRT says my router is unsupported and will "brick the router", and I can't see anything for Tomato.
So I figured I'd just use any incoming router port and forward it to the PC's 8080. I chose 6000. This was not a good choice. Looks like Chrome comes with a list of ports which it thinks shouldn't be talking http. 6000 is one of them, since it is supposed to be used for X11 traffic so Chrome told me it was unsafe and refused to co-operate.
Since it is a black-list of ports to avoid, I just happened to be unlucky (or stupid) in picking a bad one. Once I selected another, I got past that issue.
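Chrome's check is easy to emulate when picking a forwarding port. The sketch below hard-codes a few well-known entries from Chromium's restricted-port list; the authoritative list lives in Chromium's net/base/port_util.cc, so treat this subset as illustrative only:

```shell
# Illustrative subset of Chrome's restricted ports (the full list is in
# Chromium's net/base/port_util.cc; Chrome shows ERR_UNSAFE_PORT for these).
is_port_safe_for_http() {
  case "$1" in
    25|110|6000|6665|6666|6667|6668|6669) return 1 ;;  # SMTP, POP3, X11, IRC
    *) return 0 ;;
  esac
}

is_port_safe_for_http 6000 && echo "6000 ok" || echo "6000 blocked"  # blocked
is_port_safe_for_http 8088 && echo "8088 ok" || echo "8088 blocked"  # ok
```

Had I run something like this first, port 6000 would have been ruled out before Chrome got the chance to complain.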
My task list was:
- Install Oracle XE 11gR2 (Windows 64-bit)
- Configure the EPG for Apex. I ran apex_epg_config.sql, as I had switched straight from the pre-installed Apex 4.0 to 4.2.5 rather than upgrading a version I had actively used.
- Unlocked the ANONYMOUS database account
- Checked DBMS_XDB.GETHTTPPORT returned 8080
- Enabled external access by setting DBMS_XDB.SETLISTENERLOCALACCESS(false);
- I got a handy Dynamic DNS via NoIP because my home IP can potentially change (though it is very rare). [Yes, there was a whole mess about Microsoft temporarily hijacking some noip domains, but I'm not using this for anything important.] This was an option in my router setup.
- The machine that runs XE / Apex should be assigned a specific 192.168.1.nnn IP address by the router (based on its MAC address). This configuration is specific to the router hardware, so I won't go into my details here. But it is essential for the next step.
- Configure the port forwarding on the router to push incoming traffic on the router's port 8088 off to port 8080 for the IP address of the machine running XE / Apex. This is also router specific.
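For reference, the database-side steps in that list reduce to a few SQL*Plus commands run as a privileged user (this is a sketch of my setup, not a complete install guide):

```sql
-- Unlock the account the EPG uses for anonymous HTTP access
ALTER USER anonymous ACCOUNT UNLOCK;

-- Confirm the embedded gateway is listening on 8080
SELECT dbms_xdb.gethttpport FROM dual;

-- Allow connections from hosts other than localhost
EXEC dbms_xdb.setlistenerlocalaccess(false);
```

The router-side port forwarding then takes care of mapping external 8088 onto this internal 8080.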
My next step was to use the Apex Listener rather than the EPG. Oracle have actually retagged the Apex Listener as RDS (Restful Data Services) so that search engines can confuse it with Amazon RDS (Relational Database Service).
This one is relatively easy to set up, especially since I stuck with "standalone" mode for this test.
A colleague had pointed me to this OBE walkthrough on Apex PDF reports via RDS, so I took a spin through that and it all worked seamlessly.
My next step would be a regular web server/container for RDS rather than standalone. I'm tempted to give Jetty a try as the web server and container for the listener rather than Tomcat etc, but the Jetty documentation seems pretty sketchy. I'm used to the thoroughness of the documentation for Apache (as well as Oracle).
In a letter to campus leaders last month, the Cal State University system office announced that Cal State Online will no longer operate as originally conceived. Emphasis added below.
As the CSU continues to expand its online education strategies, Cal State Online will evolve as a critical component. An early Cal State Online goal will continue: to increase the quality and quantity of fully online education offerings to existing and prospective CSU students, resulting in successful completion of courses and graduation.
The re-visioning of Cal State Online was recommended by the Council of Presidents and approved by the chancellor. This will include a shift to a communication, consultation and services’ strategy for fully online campus degree programs, credentials, certificates and courses supported by opt-in shared services. Cal State Online’s shared services will be designed, delivered and managed to:
1. Make it easy for prospective and existing students to discover, decide, enroll and successfully complete their CSU online education opportunities.
2. Make it more cost-effective for CSU campuses to develop, deliver and sustain their high-quality fully online degree, credential and certificate programs and courses.
Background in a nutshell
In early 2010 a sub-set of the Cal State presidents – the Technology Steering Committee (TSC) – came up with a plan to get the system to aggressively push online education across the system. In fall 2011 the group commissioned a consultant’s set of reports to help them pick an operating model, with the reports delivered in February 2012. This study led to the creation of CSU Online, conceived as a separate 501(c)3 non-profit group1 run by the system, with the plan to use a for-profit Online Service Provider (OSP).2 Early on they realized that Colorado State University was already using the CSU Online name, and the initiative was renamed Cal State Online. The idea was to offer fully-online programs offered by individual campuses in a one-stop shop. Based on an RFP process, in August 2012 Cal State Online selected Pearson as their OSP partner.
Some media coverage of initiative:
- Cal State’s Online Plan, Inside Higher Ed, March 2012
- CSU Announces Partnership with Pearson eCollege on Cal State Online Initiative, Cal State press release, July 2012
- Cal State Goes Online, Slowly, Inside Higher Ed, August 2012
- Cal State University offers new online program for 2013, SJSU Spartan Daily, September 2012
The March IHE article quoted official Cal State documents to describe the initiative.
“The goal of Cal State Online is to create a standardized, centralized, comprehensive business, marketing and outreach support structure for all aspects of online program delivery for the Cal State University System,” says the draft RFP. In the open letter, the executive director offers assurances that “participation is optional” for each of the system’s nearly two dozen campuses, “all programs participating in Cal State Online are subject to the same approval processes as an on-campus program,” and “online courses will meet or exceed the quality standards of CSU face-to-face courses.”
What has changed?
This change is significant and recent, meaning that Cal State likely does not have full plans on what will happen in the future. For now:
- Cal State Online will no longer be a separate operating entity, and the remnant, or “re-visioned” services will be run by the existing Academic Technology Services department within the Chancellor’s Office.
The re-visioned Cal State Online team will be led by Gerry Hanley (Assistant Vice Chancellor for Academic Technology Services) with Sheila Thomas (State University Dean, Extended and Continuing Education).
- Pearson is no longer the OSP, and in fact, they had already changed their role many months ago3 to remove the on-site team and become more of a platform provider for the LearningStudio (aka eCollege) LMS and supporting services.
- Cal State is no longer attempting to provide a centralized, comprehensive support structure “for all aspects of online program delivery” but instead will centrally provide select services through the individual campuses.
- It is clear that Cal State is positioning this decision to show as much continuity as possible. They will continue to provide some of the services started under Cal State Online and will continue to support the programs that have already been offered through the group.
Some services will continue and CSU may keep the name, but it’s the end of Cal State Online as we know it.
I am working on a longer post to explain what happened, including (hopefully) some interviews for supporting information . . . stay tuned.
Update: Changed description of Pearson change and added footnote.
- I have not independently verified that the organization truly was set up as a 501(c)3.
- Pearson had a team in place at Cal State providing LMS, implementation and integration services, enrollment management & marketing, course design support, analytics and reporting, learning object repository, help desk and technical support, training and faculty support.
- I believe this occurred Feb 2014 but am not sure.
The post It’s The End of Cal State Online As We Know It . . . appeared first on e-Literate.
It looks like the site maintenance is complete and from my perspective the DNS changes have gone through.
If you go to the homepage and see a message called “Site Maintenance” in the “Site News” section, it means you are being directed to the new server. If you don’t see that it means you are still being directed to the old server and you won’t be able to read this.
I guess it will take a few hours for the DNS changes to propagate. Last time I moved the site it took a couple of days to complete for everyone.
Tim…Site Maintenance Complete! was first posted on July 19, 2014 at 11:40 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.
In contrast to admin-managed databases, policy-managed databases have no predefined mapping of an instance to a node, so any instance can run on any node. If we need to connect to a specific instance using OS authentication, we need to:
- find out the node where the instance is running
- set ORACLE_SID to the instance name
- Connect to the instance locally.
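Those three steps look like this at the prompt. This is a sketch using the cdb1 database from the demonstration below; srvctl and sqlplus obviously require the Grid Infrastructure and database environments to be set up on the node:

```shell
# 1. Find the node currently hosting the instance
srvctl status database -d cdb1
#    e.g. "Instance cdb1_2 is running on node host03"

# 2. On that node, point ORACLE_SID at the instance
export ORACLE_SID=cdb1_2

# 3. Connect locally using OS authentication
sqlplus / as sysdba
```

The annoyance is step 1: without a fixed mapping you must query the cluster every time before you know where to log in.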
Now this problem can be resolved by mapping the instances to specific nodes.
Here is the demonstration :
– check that there is no mapping of instance names to hostnames
[oracle@host01 ~]$ srvctl status database -d cdb1
Instance cdb1_1 is running on node host01
Instance cdb1_2 is running on node host03
Instance cdb1_3 is running on node host02
– Configure instance cdb1_2 to run on host02 only
[oracle@host01 ~]$ srvctl modify instance -db cdb1 -instance cdb1_2 -node host02
– check that instance cdb1_2 has relocated to host02
– The srvctl command reports the following :
- host01 is hosting cdb1_1 as earlier
- host02 is hosting 2 instances – cdb1_2 ( relocated), and cdb1_3 ( earlier)
- host03 which was hosting cdb1_2 does not host any instance presently
[oracle@host01 ~]$ srvctl status database -d cdb1
Instance cdb1_1 is running on node host01
Instance cdb1_2 is running on node host02
Instance cdb1_3 is running on node host02
Database cdb1 is not running on node host03
– Let’s verify if instance cdb1_2 has already stopped on host03
– Let’s check if service cdb1_2 is no longer registered with listener on host03
– But that is not so : cdb1_2 is still registered with listener on host03
[oracle@host03 ~]$ lsnrctl stat
...
=(PROTOCOL=tcp)(HOST=188.8.131.52)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM2", status READY, has 2 handler(s) for this service...
Service "cdb1" has 1 instance(s).
  Instance "cdb1_2", status READY, has 1 handler(s) for this service...
Service "cdb1XDB" has 1 instance(s).
  Instance "cdb1_2", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
  Instance "cdb1_2", status READY, has 1 handler(s) for this service...
The command completed successfully
– Let’s check if there is any pmon process belonging to cdb1 running on host03
– Well there is still pmon process belonging to cdb1_2 running on host03
[oracle@host03 ~]$ ps -ef |grep pmon
oracle    1499     1  0 14:54 ?        00:00:00 ora_pmon_cdb1_2
oracle    2853  1261  0 15:18 pts/1    00:00:00 grep pmon
grid      6289     1  0 09:34 ?        00:00:04 asm_pmon_+ASM2
– Let’s try to connect to the instance cdb1_2 on host03 using OS authentication
– I am able to connect to cdb1_2 successfully
[oracle@host03 ~]$ export ORACLE_SID=cdb1_2
[oracle@host03 ~]$ sqlplus / as sysdba

SQL> sho parameter instance_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_name                        string      cdb1_2
This indicates that the output of the srvctl command does not reflect reality.
– Now let’s verify on host02 also
– Instance cdb1_3 is still registered with listener on host02
[oracle@host02 ~]$ lsnrctl stat
...
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM3", status READY, has 2 handler(s) for this service...
Service "cdb1" has 1 instance(s).
  Instance "cdb1_3", status READY, has 1 handler(s) for this service...
Service "cdb1XDB" has 1 instance(s).
  Instance "cdb1_3", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
  Instance "cdb1_3", status READY, has 1 handler(s) for this service...
The command completed successfully
– pmon process of instance cdb1_3 is still running on host02 as earlier
[oracle@host02 ~]$ ps -ef |grep pmon
oracle   13118     1  0 14:59 ?        00:00:00 ora_pmon_cdb1_3
oracle   15576 11818  0 15:23 pts/2    00:00:00 grep pmon
grid     16913     1  0 10:07 ?        00:00:04 asm_pmon_+ASM3
– Using OS authentication, I am able to connect to instance cdb1_3 as earlier
[oracle@host02 ~]$ export ORACLE_SID=cdb1_3
[oracle@host02 ~]$ sqlplus / as sysdba

SQL> sho parameter instance_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_name                        string      cdb1_3
– Let’s try to stop and restart database
– Instance cannot be started on host03
[oracle@host01 ~]$ srvctl stop database -d cdb1
[oracle@host01 ~]$ srvctl start database -d cdb1
PRCR-1079 : Failed to start resource ora.cdb1.db
CRS-5017: The resource action "ora.cdb1.db start" encountered the following error:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Process ID: 0
Session ID: 0 Serial number: 0
. For details refer to "(:CLSN00107:)" in "/u01/app/12.1.0/grid/log/host03/agent/crsd/oraagent_oracle/oraagent_oracle.log".
CRS-2674: Start of 'ora.cdb1.db' on 'host03' failed
CRS-2632: There are no more servers to try to place resource 'ora.cdb1.db' on that would satisfy its placement policy

[oracle@host01 ~]$ srvctl status database -d cdb1
Instance cdb1_1 is running on node host01
Instance cdb1_2 is running on node host02
Database cdb1 is not running on node host03
– May be it is trying to start the same instance as earlier i.e. cdb1_2 on host03
– but since the instance cdb1_2 has already been started on host02, it is failing
– Let’s configure instance cdb1_3 to run on host03 and then attempt to restart the instance on host03 – it works now
[oracle@host01 ~]$ srvctl modify instance -i cdb1_3 -d cdb1 -n host03
[oracle@host01 ~]$ srvctl start instance -i cdb1_3 -d cdb1
[oracle@host01 ~]$ srvctl status database -d cdb1
Instance cdb1_1 is running on node host01
Instance cdb1_2 is running on node host02
Instance cdb1_3 is running on node host03
– Now let’s stop and restart the database once again and check the instance to node mapping
– Now it can be seen that instances cdb1_2 and cdb1_3 are running on the configured hosts only, i.e. host02 and host03 respectively
[oracle@host01 ~]$ srvctl status database -d cdb1
Instance cdb1_1 is running on node host01
Instance cdb1_2 is running on node host02
Instance cdb1_3 is running on node host03
Hence it can be inferred (this is my understanding) that after assigning instances to different hosts, we need to stop and restart the database for the mapping to actually take effect.
In the meanwhile, output of srvctl command may be misleading.
This mapping makes it very convenient to connect to the desired instance using OS authentication, as we no longer need to check which instance is currently running on a host.
I hope this post was useful.
Your comments and suggestions are always welcome!
You might be interested in this:
- 12c : PDB cannot share CDB's temporary tablespace
- ORA-12528: TNS:listener: all appropriate instances are blocking new connection
- AUTOMATIC DEGREE OF PARALLELISM (DOP) - PART - II
- 11g R2 RAC: CONVERT NON RAC DATABASE TO RAC DATABASE USING RCONFIG
- ORACLE 11G: PARALLEL STATEMENT QUEUEING
The post 12c RAC: Map Instances Of Policy Managed Database To Nodes appeared first on ORACLE IN ACTION.
This week was both D2L’s FUSION conference and Blackboard’s BbWorld. The conventional wisdom going around is that there was no big news out of either conference. In Blackboard’s case, that’s just not true. In fact, there was an astonishing amount of very significant news. It’s just that Blackboard didn’t do a very good job of explaining it to people. And that, by itself, is also news.
The big corporate keynote had to be one of the strangest I’ve ever seen. CEO Jay Bhatt ran through a whole long list of accomplishments for the year, but he only gave each one a few seconds as he rattled through the checklist. He mentioned that the company has a new mission statement but didn’t bother to explain it. It took nearly an hour of mostly talking about big macro trends in education and generalities about the categories of goals that the company has set before he finally got around to new product announcements. And then commenced what I can only describe as a carpet bombing run of announcements—a series of explosions that were over by the time you realized that they had started, leaving you to wonder what the heck had just happened. Vice President of User Experience Stephanie Weeks gave a 10-minute talk that was mostly platitudes and generalities about goals for students while some truly significant UX work that her team had done played on the video screen in the background, largely unexplained. There was something mentioned about cloud. Collaborate without a Java plugin! A new mobile app. Wait, another new mobile app, but something about jobs. Wait! Go back to the last slide! I think that was…. Is it over already? It seemed like simultaneously the longest and shortest keynote ever.
Phil and I had a chance to talk to Jay about it later in the day and asked him (politely) what he was thinking. He said, “I don’t view BbWorld as a selling conference. At all.”
Wait. What? This is the Blackboard conference, right?
Apparently it was. This executive team is nothing if not earnest about wanting to talk about the real issues in education. In fact, they’re so earnest about it that they’d rather talk about that than sell you their product. As a result, what was announced in Vegas stayed in Vegas. They made a serious mistake with their keynote plan. But as far as serious mistakes go, it was kind of awesome. And revealing. In and of itself, it is a strong indicator that, having begun a major cultural shift under Ray Henderson, the Blackboard of today under Jay Bhatt is a very different beast than the Blackboard of five or six years ago. Many of your assumptions about what the company is and what you can expect from them probably aren’t safe ones to make anymore.
Anyway, it’s not surprising that people observing the conference from afar (and even from anear) missed the announcements. So what were they?
Major UX Overhaul
In the past, a “major UX overhaul” for Blackboard typically meant “we moved around some stuff in the admin panel and put on a skin that looks 5 years out of date rather than 15.” Not this time. The new UX is very different. It takes a lot of design cues from iOS (and, to a certain degree, from Windows Mobile). Forget about the 15 different submenus. They’re moving everything to a single-page model with contextual overlays that fly in when you need them. Workflows have been greatly simplified, and many of them rethought. As I sat in on a demo later in the day, I’m pretty sure that the woman in the row in front of me started crying when she saw how much easier it is to import content from an old course.
To be fair, this isn’t shipping code. “Oh, Michael,” you’re thinking about now, “How can you be such a sucker as to fall for the old vaporware bait and switch?” Well, Phil and I spent some time in their UX lab. We were given access to what was clearly a live system (as was anyone else who came to the UX lab). The UX guy managing the lab gave us a script and warned us that this is still a system in development so if we wanted to see what is actually working today we should stick to the script. But of course, we didn’t. The workflows covered by the script were significant, and a lot that wasn’t on the script was also actually already working. This is real, folks. It may not be done yet, but it’s credible. And if the alpha we saw was any indication, it’s not crazy to imagine that Blackboard could raise the bar on LMS UX design by the time that they release. I kid you not.
Underneath all of this, some serious technical work has been done. Blackboard UX is now 100% separated from the business logic, using Node.js to deliver it and putting presentation code in the browser. Also, the new UX is fully responsive. It dynamically adjusts to the size of the browser window (and device).
Even more impressive was the overhaul of Blackboard Collaborate. The Java plugin is gone.1 It’s been replaced by a simple—dare I say elegant?—WebRTC-based UX. We saw a live demo of it. If Google had designed Hangouts specifically for education, they probably would have built something like what Blackboard is showing off. And it works. We saw it in action.
The UX overhaul would be a pretty significant development all by itself. But it wasn’t all by itself.
Blackboard Learn Is Going to the Cloud
Phil and I are still trying to nail down some of the details on this one, particularly since the term “cloud” is used particularly loosely in ed tech. For example, we don’t consider D2L’s virtualization to be a cloud implementation. But from what we can tell so far, it looks like a true elastic, single-instance multi-tenant implementation on top of Amazon Web Services. It’s kind of incredible. And by “kind of incredible,” I mean I have a hard time believing it. Re-engineering a legacy platform to a cloud architecture takes some serious technical mojo, not to mention a lot of pain. If it is true, then the Blackboard technical team has to have been working on this for a long time, laying the groundwork long before Jay and his team arrived. But who cares? If they are able to deliver a true cloud solution while still maintaining managed hosting and self-hosted options, that will be a major technical accomplishment and a significant differentiator.
This seems like the real deal as far as we can tell, but it definitely merits some more investigation and validation. We’ll let you know more as we learn it.
Bundled Products
This one may sound like a trivial improvement unless you’ve ever actually dealt with Blackboard’s sales force and trivial to implement unless you’ve ever worked in a big software company with lots of business units, but Blackboard has ended the practice of separately licensing 57 different products, each with its own sales rep and price sheet. In some cases—like xpLOR and myEDU—they’re merging the functionality into the core product. In others, they’re creating tiers of service.
Here’s how their website currently describes the tiers:
- Learning Core: Bb Learn. (But remember, they’re merging previously separate offerings into it.)
- Learning Essentials: Everything in Core plus Collaborate.
- Learning Insight: Everything in Essentials plus Analytics for Learn
- Learning Insight & Student Retention: Everything in Insight plus “retention services.” I didn’t catch this at the conference, but if it’s what it sounds like then the company is beginning to move away from differentiating between products and services and toward integrated solutions.
This should deliver more value to customers with less hassle.
Other Stuff
Those were the big announcements, but there was a lot of other stuff that floated by. It seems like they’re doing significant work on their mobile app, separate from the responsive UX work. I didn’t get a chance to even see what that is about. They’re working on a content store in partnership with MBS Books that could be more significant than it looks at a glance. There was some sort of jobs or career mobile app that whizzed by in the keynote. And who knows what else.
When I take a step back and look at this as a whole, a few thoughts run through my head. First comes, “Yeah, they had to do most of this in order to compete with Instructure. The holes they are filling are fairly clear.” Next comes, “I really didn’t believe they could pull some of this off at all, never mind as quickly and well as they seem to be doing it. Time will tell but…wow.” Then comes, “How the hell did they manage to get through a keynote with all of this in it and not blow people out of their chairs?” And finally, “Who would have thought in a million years that the LMS space could become interesting again?”
But there you have it. This is just a news post; the implications for Blackboard and the market are many and significant. Phil and I will have more to say about it in the days and weeks ahead. For now, the take-home message can be summed up thusly:
- Many Bothans died to bring you this enhancement.
The Fluid UI is one of the most exciting new features PeopleSoft has delivered in years. With Fluid UI, app developers build responsive user interfaces that run on smartphones, mini tablets, tablets, and even desktops/laptops using the same PeopleTools components, pages, records and PeopleCode they’ve been using for years. There’s one difference: with classic PeopleSoft, the UI is pixel perfect, meaning what you see on the page is exactly what you built in page designer. With Fluid UI, the contents of a page are based on styles and may change depending on the size of the device. It’s all still HTML, just a new way of laying out the fields.
PeopleSoft is gearing up for this big change and will deliver a new set of responsive, mobile ready applications soon. In keeping with PeopleSoft's continuous delivery strategy, the new content will be delivered in a PUM instance, available on the 9.2 product line.
A lot of developers from customers and partners will want to get up to speed and become familiar with how Fluid UI works. An easy path is to wait until the application content is available, and use that content to learn from. For those that just can’t wait, we’ll start a mini-blog series that shows you how to build your first Fluid component. Stay tuned. We’re off to talk about the great new features of 8.54 at Reconnect in Chicago, but after that we’ll begin showing you how it all works.
Five articles explaining how to use ZFS in the real world, by Oracle ACE Alexandre Borges.
Tech Article: Playing with ZFS Encryption -
Oracle Solaris 11 supports native encryption on ZFS so that it can protect critical data without depending on external programs. It's also integrated with the Cryptographic Framework. Alexandre explains the benefits of these and other Oracle Solaris encryption capabilities, and the different methods for encrypting and decrypting files, file systems, and pools.
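The core of Solaris 11 ZFS encryption is only a couple of commands. A minimal sketch follows; the dataset name is made up, and note that encryption must be chosen when the dataset is created:

```shell
# Create an encrypted dataset; ZFS prompts for a passphrase at creation
zfs create -o encryption=on rpool/export/secret

# Confirm the encryption property on the new dataset
zfs get encryption rpool/export/secret

# After a reboot, load the wrapping key before the data is accessible
zfs key -l rpool/export/secret
```

Alexandre's article goes further into key sources, rewrapping keys, and how encryption interacts with compression and deduplication.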
Article - Building Bridges - Accept the existence of silos in large organizations, but build bridges between them, and incentives to use those bridges, by my colleague Bob Rhubart, manager of OTN's Architect Community.
Organizational silos thwarting IT architecture goals? Put away the sledgehammer.
Java Community - Blog - Oracle releases #JavaSE 8 Update 11 and Java SE 7 Update 65 - Developers can download the latest Java SE JDK and JRE from the Oracle Technology Network.
Java Magazine: The July/August issue of Java Magazine explores the Java Virtual Machine (JVM), and includes a JavaOne preview.
RT @OracleAcademy: Where Are the #Women in Makerspaces? #WomeninSTEM #gendergap #tech - Read more here.
Database Community -
Hey Hey! Oracle has published its Critical Patch Update Advisory for July 2014. Get it here. Send it to your friends!
Web Launch Replay- Oracle Big Data SQL - Bringing Structured Queries to an Unstructured World. Oracle has just launched Oracle Big Data SQL.
Pre-Built Developer VMs (for Oracle VM VirtualBox) -
Learning your way around a new software stack is challenging enough without having to spend multiple cycles on the install process. Instead, we have packaged such stacks into pre-built Oracle VM VirtualBox appliances that you can download, install, and experience as a single unit. Just download and assemble the files, import them into VirtualBox (available for free), and go (but not for production use).
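The import step above can also be scripted with the `VBoxManage` command-line tool that ships with VirtualBox. This is a hedged sketch; the appliance filename is a placeholder for whichever Developer VM you downloaded.

```shell
# Preview what the appliance contains (VMs, disks, settings)
# without actually importing anything.
VBoxManage import Developer_VM.ova --dry-run

# Perform the real import into VirtualBox.
VBoxManage import Developer_VM.ova
```

The dry run is handy for checking disk sizes and memory settings before committing to the import.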
Aside from the techniques they use, the most dangerous tool hackers have at their disposal is the ability to network with organized criminal syndicates.
Many experienced deviants who have made an unorthodox, yet profitable career out of unlawful behavior have realized that the Internet provides them with relatively safe avenues to steal money. These figures hold no biases regarding who they target, attacking enterprise servers and consumer computers.
The best way to deter these persistent criminals from succeeding is by employing database activity monitoring, malware detection software and staff members skilled in the craft of information protection. The latter factor is particularly important, as those who have encountered aggressive cyberattacks likely know how to defend networks against them.
The strength of a network
According to PC World, French and Romanian officials broke up a cybercriminal organization made up of Romanian citizens who used malware to infect the databases of money transfer enterprises in Germany, Norway, the United Kingdom, Austria and Belgium. European law enforcement agency Europol noted that the group used remote access Trojans to infiltrate the systems, allowing them to conduct unsanctioned transactions.
The Romanian Directorate for Investigating Organized Crime and Terrorism (DIICOT) reported that the illicit organization would deliver fictitious money transfers from sham senders to real recipients. In one instance, a franchisor lost $800,000 as a result of the scheme.
Cybercriminals are recognizing that enterprises have been tightening database security in response to such attacks, leading them to utilize more sophisticated techniques. ZDNet contributor Charlie Osborne referenced Gyges, a form of espionage malware engineered by government developers, as being one of the most difficult deployments to detect.
She cited a recent report from Sentinel Labs, which surmised that the malicious software likely originated in Russia and is "virtually invisible." The program can remain active for long periods of time, unbeknownst to victims. Hackers are now reengineering Gyges to create more advanced ransomware and rootkits, the latter being code that shields covert processes from detection.
One of the characteristics that makes Gyges so tricky is its ability to infiltrate systems while users are inactive, a significant departure from the processes employed by conventional malware. In addition, Gyges is capable of transporting other forms of malicious code that can be triggered once the desired target has been reached.
Between organized criminal networks and government-grade malware at the disposal of cybercriminals, it's safe to say organizations need to find ways to optimize their database protection.
The post Cybercriminals using more tools, are better connected appeared first on Remote DBA Experts.