
Feed aggregator

Invoking the Billing API for Bluemix Public Organizations

Pas Apicella - 16 hours 37 min ago
The ability to view usage data from a billing perspective on IBM Bluemix Public is available as a REST-based API. To use it, follow the steps below.

In order to use the API you must have the Billing Manager role for the organization or be its Account Owner.



Steps

1. Log into the PUBLIC Bluemix region as shown below

pasapicella@Pas-MacBook-Pro:~$ cf login -u pasapi@au1.ibm.com -p ***** -o pasapi@au1.ibm.com -s dev
API endpoint: https://api.ng.bluemix.net
Authenticating...
OK

Targeted org pasapi@au1.ibm.com

Targeted space dev

API endpoint:   https://api.ng.bluemix.net (API version: 2.40.0)
User:           pasapi@au1.ibm.com
Org:            pasapi@au1.ibm.com
Space:          dev

2. List all your Organizations as shown below

pasapicella@Pas-MacBook-Pro:~$ cf orgs
Getting orgs as pasapi@au1.ibm.com...

name
iwinoto@au1.ibm.com
vralh@au1.ibm.com
MobileQualityAssurance
arthur.proestakis@au1.ibm.com
abentley@au1.ibm.com
shawmale@au1.ibm.com
ANZ-Innovation-Lab
pasapi@au1.ibm.com
NAB Experimentation
Telstra-CustomerA

3. Determine the GUID of the Org you want to get metering usage from

pasapicella@Pas-MacBook-Pro:~$ cf org pasapi@au1.ibm.com --guid
e270a605-978e-45fc-9507-00a50dec2469

4. Determine the region name for the PUBLIC instance you're connected to as follows

pasapicella@Pas-MacBook-Pro:~$ curl http://mccp.ng.bluemix.net/info
{
  "name": "Bluemix",
  "build": "221004",
  "support": "http://ibm.com",
  "version": 2,
  "description": "IBM Bluemix",
  "authorization_endpoint": "https://mccp.ng.bluemix.net/login",
  "token_endpoint": "https://mccp.ng.bluemix.net/uaa",
  "min_cli_version": null,
  "min_recommended_cli_version": null,
  "api_version": "2.40.0",
  "app_ssh_endpoint": "ssh.ng.bluemix.net:2222",
  "app_ssh_host_key_fingerprint": null,
  "app_ssh_oauth_client": "ssh-proxy",
  "routing_endpoint": "https://api.ng.bluemix.net/routing",
  "logging_endpoint": "wss://loggregator.ng.bluemix.net:443",
  "doppler_logging_endpoint": "wss://doppler.ng.bluemix.net:4443",
  "console_endpoint": "https://mccp.ng.bluemix.net/console",
  "region": "us-south"
}

5. Determine the OAuth token for your connected session as follows

pasapicella@Pas-MacBook-Pro:~$ cf oauth-token
Getting OAuth token...
OK

bearer eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiI0ODAzOWM1My0yODZhLTQ5Y2YtYWIzYi0yNGVhZTY
4ZmFmYzIiLCJzdWIiOiJiNmMwMjBiNC1lMTFhLTQ2MzAtYTZhMi0zZjIwZmNlYzdmOTAiL
CJzY29wZSI6WyJjbG91ZF9jb250cm9sbGVyLnJlYWQiLCJwYXNzd29yZC53cml0ZSIsImNsb3
VkX2NvbnRyb2xsZXIud3JpdGUiLCJvcGVuaWQiXSwiY2xpZW50X2lkIjoiY2YiLCJjaWQiOiJj
ZiIsImF6cCI6ImNmIiwiZ3JhbnRfdHlwZSI6InBhc3N3b3JkIiwidXNlcl9pZCI6ImI2YzAyMGI0L
WUxMWEtNDYzMC1hNmEyLTNmMjBmY2VjN2Y5MCIsIm9yaWdpbiI6InVhYSIsInVzZXJf
bmFtZSI6InBhc2FwaUBhdTEuaWJtLmNvbSIsImVtYWlsIjoicGFzYXBpQGF1MS5pYm0uY29t
IiwicmV2X3NpZyI6IjVjOGMyODQ4IiwiaWF0IjoxNDU1MDU3NzQxLCJleHAiOjE0NTUxMD
A5NDEsImlzcyI6Imh0dHBzOi8vdWFhLm5nLmJsdWVtaXgubmV0L29hdXRoL3Rva2VuIiwiem
lkIjoidWFhIiwiYXVkIjpbImNsb3VkX2NvbnRyb2xsZXIiLCJwYXNzd29yZCIsImNmIiwib3Blbm
lkIl19.EUEIXZ-XgxQbvTQnSgrToODHbNmKvhx0PtAp9CaiPTk

At this point we are ready to invoke the Billing/Metering API. The endpoint format is as follows.

Bluemix Endpoint:

https://rated-usage.ng.bluemix.net/v2/metering/organizations/us-south:ORG_ID/usage/YYYY-MM

  1. ORG_ID: the organization GUID obtained in step 3
  2. YYYY-MM: Year and month for which usage is required
The curl request format is as follows:

curl -v -X GET -H "Authorization: bearer {oauth-token}" "https://rated-usage.ng.bluemix.net/v2/metering/organizations/us-south:e270a605-978e-45fc-9507-00a50dec2469/usage/2016-02" | python -m json.tool
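
If you prefer to call the API programmatically rather than shelling out to curl, the same request can be made from Python. The sketch below is not part of the original post: it assumes the requests library is available, uses the GUID and token gathered in steps 3 and 5 as placeholder values, and simply sums the reported cost per application from the JSON structure shown in the output further down.

import requests

# Placeholders - substitute the values gathered in steps 3 and 5.
REGION = "us-south"
ORG_GUID = "e270a605-978e-45fc-9507-00a50dec2469"   # cf org <org-name> --guid
OAUTH_TOKEN = "bearer eyJ..."                        # cf oauth-token (already includes the "bearer " prefix)
MONTH = "2016-02"

url = ("https://rated-usage.ng.bluemix.net/v2/metering/organizations/"
       "{0}:{1}/usage/{2}".format(REGION, ORG_GUID, MONTH))

resp = requests.get(url, headers={"Authorization": OAUTH_TOKEN})
resp.raise_for_status()

# Walk the non-billable usage shown in the sample output below and
# print the total cost per application.
for org in resp.json()["organizations"]:
    for space in org["non_billable_usage"]["spaces"]:
        for app in space.get("applications", []):
            total = sum(item["cost"] for item in app["usage"])
            print("{0}: {1:.2f} {2}".format(app["name"], total, org["currency_code"]))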
 
6. Invoke the API using curl as follows

Output:

pasapicella@Pas-MacBook-Pro:~$ curl -v -X GET -H "Authorization: bearer {oauth-token}" "https://rated-usage.ng.bluemix.net/v2/metering/organizations/us-south:e270a605-978e-45fc-9507-00a50dec2469/usage/2016-02" | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying 75.126.70.44...
* Connected to rated-usage.ng.bluemix.net (75.126.70.44) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate: *.ng.bluemix.net
* Server certificate: DigiCert SHA2 Secure Server CA
* Server certificate: DigiCert Global Root CA
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0> GET /v2/metering/organizations/us-south:e270a605-978e-45fc-9507-00a50dec2469/usage/2016-02 HTTP/1.1
> Host: rated-usage.ng.bluemix.net
> User-Agent: curl/7.43.0
> Accept: */*
> Authorization: bearer eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiI0ODAzOWM1My0yODZhLTQ5Y2YtYWIzYi0yNGVhZTY4ZmFmYzIiLCJzdWIiOiJiNmMwMjBiNC1lMTFhLTQ2MzAtYTZhMi0zZjIwZmNlYzdmOTAiLCJzY29wZSI6WyJjbG91ZF9jb250cm9sbGVyLnJlYWQiLCJwYXNzd29yZC53cml0ZSIsImNsb3VkX2NvbnRyb2xsZXIud3JpdGUiLCJvcGVuaWQiXSwiY2xpZW50X2lkIjoiY2YiLCJjaWQiOiJjZiIsImF6cCI6ImNmIiwiZ3JhbnRfdHlwZSI6InBhc3N3b3JkIiwidXNlcl9pZCI6ImI2YzAyMGI0LWUxMWEtNDYzMC1hNmEyLTNmMjBmY2VjN2Y5MCIsIm9yaWdpbiI6InVhYSIsInVzZXJfbmFtZSI6InBhc2FwaUBhdTEuaWJtLmNvbSIsImVtYWlsIjoicGFzYXBpQGF1MS5pYm0uY29tIiwicmV2X3NpZyI6IjVjOGMyODQ4IiwiaWF0IjoxNDU1MDU3NzQxLCJleHAiOjE0NTUxMDA5NDEsImlzcyI6Imh0dHBzOi8vdWFhLm5nLmJsdWVtaXgubmV0L29hdXRoL3Rva2VuIiwiemlkIjoidWFhIiwiYXVkIjpbImNsb3VkX2NvbnRyb2xsZXIiLCJwYXNzd29yZCIsImNmIiwib3BlbmlkIl19.EUEIXZ-XgxQbvTQnSgrToODHbNmKvhx0PtAp9CaiPTk
>
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0< HTTP/1.1 200 OK
< X-Backside-Transport: OK OK
< Connection: Keep-Alive
< Transfer-Encoding: chunked
< Content-Type: application/json; charset=utf-8
< Date: Tue, 09 Feb 2016 22:54:44 GMT
< Etag: W/"3bcc-JgmFioUYI4v46tUXnGY1SQ"
< Vary: Accept-Encoding
< X-Cf-Requestid: 7b5cea8c-1a24-4114-44b2-a45e5d6e6f40
< X-Heap-Used: 136304240
< X-Instance-Id: 657c5e04638a49788a1053e7bb4e22ff
< X-Instance-Index: 5
< X-Node-Version: v0.10.41
< X-Powered-By: Express
< X-Process-Id: 93
< X-Response-Time: 3374.537ms
< X-Uptime: 16055
< X-Client-IP: 124.180.37.173
< X-Global-Transaction-ID: 750960253
<
{ [4055 bytes data]
100 15308    0 15308    0     0   3019      0 --:--:--  0:00:05 --:--:--  3967
* Connection #0 to host rated-usage.ng.bluemix.net left intact
{
    "organizations": [
        {
            "billable_usage": {
                "spaces": []
            },
            "currency_code": "AUD",
            "id": "e270a605-978e-45fc-9507-00a50dec2469",
            "name": "pasapi@au1.ibm.com",
            "non_billable_usage": {
                "spaces": [
                    {
                        "applications": [
                            {
                                "id": "121ccef0-2417-49c4-9f8f-47958b6d819d",
                                "name": "pas-bmspringboot-demo",
                                "usage": [
                                    {
                                        "buildpack": "0154f971-ae72-4882-9695-bda6e31310b7",
                                        "cost": 8.531996000805556,
                                        "quantity": 107.45586902777778,
                                        "runtime": {
                                            "id": "0154f971-ae72-4882-9695-bda6e31310b7",
                                            "name": "liberty-for-java_v2_1-20151006-0912"
                                        },
                                        "unit": "GB-HOURS",
                                        "unitId": "GB_HOURS_PER_MONTH"
                                    }
                                ]
                            },
                            {
                                "id": "2d7dfb5f-0900-4c4a-a936-edaf3b7febb8",
                                "name": "pas-tonynode",
                                "usage": [
                                    {
                                        "buildpack": "f0bff590-8b49-4c7d-bc4a-3ff24adcd411",
                                        "cost": 8.531996000805556,
                                        "quantity": 107.45586902777778,
                                        "runtime": {
                                            "id": "f0bff590-8b49-4c7d-bc4a-3ff24adcd411",
                                            "name": "sdk-for-nodejs_v2_8-20151209-1403"
                                        },
                                        "unit": "GB-HOURS",
                                        "unitId": "GB_HOURS_PER_MONTH"
                                    }
                                ]
                            },
                            {
                                "id": "3a962319-e7c4-456f-a2a4-b1f356a5d142",
                                "name": "pas-dotnet-helloworld",
                                "usage": [
                                    {
                                        "buildpack": "0a566654-d250-463e-b413-67782482e903",
                                        "cost": 4.265998000402778,
                                        "quantity": 53.72793451388889,
                                        "runtime": {
                                            "id": "0a566654-d250-463e-b413-67782482e903",
                                            "name": "aspnet5-experimental"
                                        },
                                        "unit": "GB-HOURS",
                                        "unitId": "GB_HOURS_PER_MONTH"
                                    }
                                ]
                            },
                            {
                                "id": "54629864-0e43-488f-bfca-3f9c9d806de6",
                                "name": "pas-mysql-local",
                                "usage": [
                                    {
                                        "buildpack": "dac36860-94be-495a-96f5-d81d79c2ef3f",
                                        "cost": 7.498824610083008,
                                        "quantity": 94.44363488769531,
                                        "runtime": {
                                            "id": "dac36860-94be-495a-96f5-d81d79c2ef3f",
                                            "name": "java_buildpack"
                                        },
                                        "unit": "GB-HOURS",
                                        "unitId": "GB_HOURS_PER_MONTH"
                                    }
                                ]
                            },
                            {
                                "id": "59f15702-1c42-444b-a1fb-94fbaf6cb27a",
                                "name": "pas-mobile-web",
                                "usage": [
                                    {
                                        "buildpack": "0154f971-ae72-4882-9695-bda6e31310b7",
                                        "cost": 8.531996000805556,
                                        "quantity": 107.45586902777778,
                                        "runtime": {
                                            "id": "0154f971-ae72-4882-9695-bda6e31310b7",
                                            "name": "liberty-for-java_v2_1-20151006-0912"
                                        },
                                        "unit": "GB-HOURS",
                                        "unitId": "GB_HOURS_PER_MONTH"
                                    }
                                ]
                            },
                            {
                                "id": "926900dd-ccd7-4442-8f58-413df2bc0237",
                                "name": "pas-mongodb-local",
                                "usage": [
                                    {
                                        "buildpack": "dac36860-94be-495a-96f5-d81d79c2ef3f",
                                        "cost": 7.498824610083008,
                                        "quantity": 94.44363488769531,
                                        "runtime": {
                                            "id": "dac36860-94be-495a-96f5-d81d79c2ef3f",
                                            "name": "java_buildpack"
                                        },
                                        "unit": "GB-HOURS",
                                        "unitId": "GB_HOURS_PER_MONTH"
                                    }
                                ]
                            },
                            {
                                "id": "ab5a3278-a1c1-44f6-9113-713a4d800131",
                                "name": "bluemix-apples-springboot",
                                "usage": [
                                    {
                                        "buildpack": "dac36860-94be-495a-96f5-d81d79c2ef3f",
                                        "cost": 8.531996000805556,
                                        "quantity": 107.45586902777778,
                                        "runtime": {
                                            "id": "dac36860-94be-495a-96f5-d81d79c2ef3f",
                                            "name": "java_buildpack"
                                        },
                                        "unit": "GB-HOURS",
                                        "unitId": "GB_HOURS_PER_MONTH"
                                    }
                                ]
                            },
                            {
                                "id": "b448fd3a-5350-47d2-820d-7f739a057f22",
                                "name": "pas-SpringBootJARDemo",
                                "usage": [
                                    {
                                        "buildpack": "eb0b11e9-8982-4b93-adcb-7350d0bf2ae4",
                                        "cost": 8.531996000805556,
                                        "quantity": 107.45586902777778,
                                        "runtime": {
                                            "id": "eb0b11e9-8982-4b93-adcb-7350d0bf2ae4",
                                            "name": "liberty-for-java_v2_3-20151208-1311"
                                        },
                                        "unit": "GB-HOURS",
                                        "unitId": "GB_HOURS_PER_MONTH"
                                    }
                                ]
                            },
                            {
                                "id": "b7d3d442-5546-41b4-b5c0-4ef737734e7b",
                                "name": "pas-sb-elastic",
                                "usage": [
                                    {
                                        "buildpack": "dac36860-94be-495a-96f5-d81d79c2ef3f",
                                        "cost": 7.498824610083008,
                                        "quantity": 94.44363488769531,
                                        "runtime": {
                                            "id": "dac36860-94be-495a-96f5-d81d79c2ef3f",
                                            "name": "java_buildpack"
                                        },
                                        "unit": "GB-HOURS",
                                        "unitId": "GB_HOURS_PER_MONTH"
                                    }
                                ]
                            },


http://feeds.feedburner.com/TheBlasFromPas
Categories: Fusion Middleware

Partner Webcast: 5 Things to Consider When Upgrading Your Legacy Portal & Content Environments to Oracle WebCenter 12c

WebCenter Team - Tue, 2016-02-09 08:54

5 Things to Consider When Upgrading Your Legacy Portal & Content Environments to Oracle WebCenter 12c

Thursday, February 25, 2016 | 1 PM EST / 10 AM PST 

Too many companies still rely on legacy systems, or other outdated platforms, that are not capable of supporting the new demands of modern business. By effectively addressing the issues of transitioning from fragmented and disparate applications to a single integrated global system, you can lower total cost of ownership and avoid end-of-life support and maintenance deadlines.
TekStream brings tribal knowledge of BEA WebLogic Portal, BEA AquaLogic User Interaction (ALUI), WebCenter Interaction (WCI), Plumtree, Stellent, Universal Content Management (UCM), Optika, Imaging and Process Management (IPM), and FatWire into one centrally manageable platform, Oracle WebCenter.
With the recent introduction of the Oracle WebCenter 12c platform, here are 5 things that you should consider when upgrading your legacy environment.
  1. How do I know if I need to upgrade?
  2. How long will an upgrade or WebCenter implementation take?
  3. What's the difference between Oracle WebCenter 12c and what I'm currently using?
  4. How much will my project cost?
  5. What will be delivered with an upgrade?

Our specialized, three-step assessment, called QuickStream, aligns business stakeholders and IT organizations using a proven and practical methodology. By providing answers to critical questions like the ones above before a project begins, QuickStream helps organizations avoid the negative business outcomes of IT project failures due to unmet quality, cost and expectations.

Register Today

Weblogic: GC Log Generation

Online Apps DBA - Tue, 2016-02-09 06:02
This entry is part 6 of 6 in the series WebLogic Server


This post covers GC (garbage collection) log generation in WebLogic and is a must-read if you are learning WebLogic.

We cover GC log generation in our Oracle WebLogic Training along with other topics such as creating a WebLogic domain, managed servers, clustering, deployment, logging, JMS, JTA, JDBC, JMX, security, performance tuning and troubleshooting.

GC Log Generation

 

1. Create a Java source file:

vi JavaHeapVerboseGCTest.java

2. Add the following sample code to the JavaHeapVerboseGCTest.java file:

import java.util.Map;
import java.util.HashMap;

/**
 * JavaHeapVerboseGCTest
 * @author Pierre-Hugues Charbonneau
 */
public class JavaHeapVerboseGCTest {

    private static Map<String, String> mapContainer = new HashMap<String, String>();

    /**
     * @param args
     */
    public static void main(String[] args) {

        System.out.println("Java 7 HotSpot Verbose GC Test Program v1.0");
        System.out.println("Author: Pierre-Hugues Charbonneau");
        System.out.println("http://javaeesupportpatterns.blogspot.com/");

        String stringDataPrefix = "stringDataPrefix";

        // Load the Java heap with 3 million java.lang.String instances
        for (int i = 0; i < 3000000; i++) {
            String newStringData = stringDataPrefix + i;
            mapContainer.put(newStringData, newStringData);
        }
    }
}

3. Compile the Java program with the following command:

 javac JavaHeapVerboseGCTest.java

4. Run the program with verbose GC logging enabled:

java -verbose:gc JavaHeapVerboseGCTest

Output: verbose GC entries are printed to the console as the map is populated.

If you want to learn more on WebLogic like above or wish to discuss challenges you are hitting in Oracle WebLogic Server, register for our Oracle WebLogic Administration Training.

We are so confident in the quality and value of our training that we provide a 100% money-back guarantee: in the unlikely case that you are not happy after 2 sessions, just drop us a mail before the third session and we'll refund your money in full.

Did you subscribe to our YouTube Channel (435 already subscribed) and Private Facebook Group (666 members)?

The post Weblogic: GC Log Generation appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

LittleArduinoProjects#180 Colpitts Oscillator

Paul Gallagher - Tue, 2016-02-09 05:50
A Colpitts oscillator uses a combination of inductors and capacitors to produce an oscillation at the resonant frequency of the LC circuit.

To see that in action, I built one on a protoboard and it delivers an almost-perfect 22.9 kHz, compared to the theoretical 22.5 kHz.
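
For reference, the theoretical figure comes from the usual Colpitts resonance formula f = 1 / (2π√(L·Ceq)), where the two tank capacitors appear in series (Ceq = C1·C2 / (C1 + C2)). The quick Python check below is mine, not part of the original post, and the component values are illustrative assumptions only (picked to land near 22.5kHz); the actual values are in the GitHub notes.

import math

# Assumed component values - not necessarily the ones used on the protoboard.
L = 10e-3    # tank inductance, 10 mH
C1 = 10e-9   # tank capacitor 1, 10 nF
C2 = 10e-9   # tank capacitor 2, 10 nF

c_eq = (C1 * C2) / (C1 + C2)                    # series combination of C1 and C2
f = 1.0 / (2 * math.pi * math.sqrt(L * c_eq))   # resonant frequency in Hz
print("Resonant frequency: {0:.1f} kHz".format(f / 1000.0))   # ~22.5 kHz for these values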

As always, all notes and code are on GitHub.

Here's a trace of the output signal on CH1, and the mid-point of the capacitor pair on CH2:

Node in the Cloud: Oracle DBaaS, App Container Cloud and node-oracle

Christopher Jones - Tue, 2016-02-09 01:21

The node-oracledb driver is pre-installed on the Oracle Application Container Cloud when you create a Node service! Yay!

I've posted a video on deploying a Node application to the cloud and connecting to Oracle Database Cloud Service. (Blatant plug: subscribe to the YouTube channel!)

The brief summary is that I developed a Node application in my local environment. I then created a database service, zipped all the JavaScript files along with a manifest telling the App Container Cloud which source file to run, and uploaded this zip to a Node cloud service. DB credentials are referenced in the app by environment variables; the variables are made available by the App Container Cloud when a DBaaS instance is associated with it.

You can try it all out by applying for a 30 day free trial on the Oracle Cloud.

All the JavaScript modules except native add-ons like node-oracledb should be included in your application zip bundle. You might have been developing on a different OS than the one used in the container, so bundled native add-ons won't work. The container simply unzips your bundle and runs it; it will find the node-oracledb installed globally on the container just fine.

Making Lab Sections Interactive: More evidence on potential of course redesign

Michael Feldstein - Mon, 2016-02-08 19:51

By Phil HillMore Posts (385)

Two weeks ago Michael and I posted a third article on EdSurge that described an encouraging course redesign for STEM gateway courses.

In our e-Literate TV series on personalized learning, we heard several first-hand stories about the power of simple and timely feedback. As described in the New York Times, administrators at the University of California, Davis, became interested in redesigning introductory biology and chemistry courses, because most of the 45 percent of students who dropped out of STEM programs did so by the middle of their second year. These students are the ones who typically take large lecture courses.

The team involved in the course-redesign projects wanted students to both receive more individual attention and to take more responsibility for their learning. To accomplish these goals, the team employed personalized learning practices as a way of making room for more active learning in the classroom. Students used software-based homework to experience much of the content that had previously been delivered in lectures. Faculty redesigned their lecture periods to become interactive discussions.

The UC Davis team focused first on redesigning the lab sections to move away from content delivery (TAs lecturing) to interactive sessions where students came to class prepared and then engaged in the material through group discussions (read the full EdSurge article for more context). In the UC Davis case, this interactive approach was based on three feedback loops:

  • Immediate Feedback: The software provides tutoring and “immediate response to whether I push a button” as students work through problems, prior to class.
  • Targeted Lecture and Discussion: The basic analytics showing how students have done on the pre-lab questions allows the TA to target lecture and discussion in a more personal manner—based on what the specific students in that particular section need. “I see the questions that most of my class had a difficulty with, and then I cover that in the next discussion,” Fox says.
  • Guidance: The TA “would go over the answers in discussion.” This occurs both as she leads an interactive discussion with all students in the discussion section and as she provides individual guidance to students who need that help.

Formative Feedback Loops UCD

The opportunity to make the lab sections truly interactive, and not just one-way content delivery through lectures, is not unique to the UC Davis example. Shortly after publishing the article, I found another course redesign that plays on some of the same themes. This effort at Cal State Long Beach (CSULB) was described in the Press-Telegram article:

Sitting near a skeleton in a Cal State Long Beach classroom last week, Professor Kelly Young dissected a course redesign that transformed a class from a notorious stumbling block to a stepping stone toward graduation.

Young has reduced the number of students failing, withdrawing or performing below average in Bio 208: Human Anatomy from 50 percent to fewer than 20 percent in about four years, and poorly performing students have watched their grades climb, with continued improvement on the horizon.

That statistic is worth exploring, especially when considering that 500-600 students take this class each year at CSULB.

Thanks to the CSULB Course Redesign work, this work in Bio 208 has some very useful documentation available on the MERLOT repository. Like the UC Davis team, the CSULB team first redesigned the lab sections, “flipping them” to enable a more personalized approach within the small sections. Unlike UC Davis, CSULB centered the content on videos and podcasts.

While we have been working on refining the lecture over the past several years, the Redesign Project has allowed us to get serious about redesigning the laboratory (the source of low grades for most of the students). During the semester, students learn over 1,500 structures just in the laboratory portion of the course. Despite asking them to look at the material before class, students would routinely come to the laboratory session totally unprepared. Flipping the class was an enticing solution to increase preparedness- and therefore success.


After trial and error over a few years, the team has created a series of “Anatomy on Demand” annotated videos. But as the team pointed out, this is not the actual important factor.

While the videos often get attention in a flipped classroom proposal, the true focus of our project is what we do with the newly-created class time in the laboratory provided by flipping the lectures. The most important aspect of this project is our new interactive laboratory sessions that serve to deepen understanding of the material. The idea is that a student will watch the relevant short videos (usually 5-7 per week) prior to coming to the laboratory, arrive prepared to their laboratory, take a short quiz that is reduced in rigor but assures readiness, and then spend at least two hours in the laboratory exploring the structures in detail at interactive small group stations.

The effect has been that students are moving from receiving introductions to material and now participating in critical thinking in the lab.

This new method allows prepared students to deeply interact with the material, as opposed to merely being introduced to it. In previous years, we hoped to have students leave the laboratory with some rote memorization of the structures complete. In contrast, when students arrive with a basic understanding of the structures, we are able to use laboratory time to ask application and critical thinking questions.

After applying multiple redesign elements and interventions, the CSULB team started seeing impressive results, especially starting in Spring 2014. This is where they are tracking the reduction in percentage of students getting D, F, or Withdraw from almost 50% to approximately 20%.


Both of these course redesigns were led by university faculty and staff and are showing impressive results. Not just in grades but in deeper student learning. Kudos to both the UC Davis team and the CSULB team.

The post Making Lab Sections Interactive: More evidence on potential of course redesign appeared first on e-Literate.

Interaction Hub Image 2 Now Available

PeopleSoft Technology Blog - Mon, 2016-02-08 18:42

As we reported last June, the PeopleSoft Interaction Hub now uses the PeopleSoft Update Manager to deliver all updates and maintenance.  Customers can now take advantage of the Selective Adoption process when updating their Hub system.  This puts the Hub in alignment with all other PeopleSoft applications.  We are happy to announce that image 2 is now generally available from the PeopleSoft Update Manager home page. (Choose the Update Image Home Pages tab, then choose the Interaction Hub Update Image page.)

Here are a few of the valuable features included in this image that customers should consider:

  • Guided process for branding.  The Hub offers a simple guided process that enables customers to do simple branding of their Fluid UI-based home pages and headers. Using this quick and easy process, administrators can set the header logo, banner color and text, background image and color, and more.  The administrator can also set roles to determine which branding themes are seen by which roles within an enterprise.  In addition, once the branding theme is set, you can publish your branding theme across all PeopleSoft applications in the wizard at the push of a button!  The Hub also gets its own delivered theme, but of course it's easy to create your own.
  • Administrator Landing Page.  The Hub delivers a landing page from which an administrator can monitor the health and performance of their PeopleSoft ecosystem.  This may also be where the administrator performs branding activities.
  • Guest Landing Page.  This page can be assigned to Guest roles for people that don't have full access to a system.
  • Navigation Collections.  Fluid Navigation Collections were actually implemented in PeopleTools 8.55.  However, Nav Collections have been used extensively in Hub designs, so now customers can design Fluid Hubs with home pages that use Nav Collections to streamline user navigation.

See the Update Image Home Page for complete details about this image.  

In addition, review the Planned Features page on My Oracle Support for updates regarding the Hub and a look at what we have planned for future images. 

Multiple PeopleSoft Applications: Using the Interaction Hub or Not

PeopleSoft Technology Blog - Mon, 2016-02-08 16:25

Many PeopleSoft customers have multiple PeopleSoft applications.  We often refer to this type of environment as a ‘cluster’.  Customers have different types of clusters, some using the Interaction Hub to unify the cluster and some without the Hub.  We are often asked how these are different.  Should a customer use the Interaction Hub for clusters?  Can you get by without it?  

The scenario I’m describing in this post relates to PeopleTools 8.55 and its use with applications using the Fluid header (whether the content is Classic or Fluid).

What is a Portal System in PeopleSoft?

This can be somewhat confusing because the PeopleSoft Interaction Hub was originally called the ‘PeopleSoft Enterprise Portal’, which was designed to be used as a traditional portal, aggregating content from multiple sources.  People still often refer to that product as simply ‘The Portal’.  The Interaction Hub still provides that traditional, aggregating functionality.  However, PeopleSoft also uses the term ‘Portal System’ to mean more than just the Interaction Hub system connecting to content systems like HCM, FSCM, and Campus Solutions.  In PeopleSoft, whatever system the user logs into first is considered to be acting as the Portal system.  For example, if Single Sign-On (SSO) is set up between HCM and Campus Solutions and the user logs into the HCM system and from there accesses Campus Solutions, then the HCM system acts as the Portal system.

What Does This Mean to My Users?

Let’s look at a specific Example.  In this scenario, the customer is NOT using the Interaction Hub, but they have PeopleSoft HCM and FSCM and have SSO set up for these applications.  A user of this system logs into HCM and is presented with an HCM-based Employee Home Page.  That home page contains a tile for FSCM content, and there is other FSCM content represented in the Nav Bar.  When the user selects an FSCM tile, they are presented with the FSCM content, but the header context remains in HCM. This means that the Nav Bar will still be the one from the HCM system.  It also means that if the user employs Global Search, the results will only come from HCM. Similarly, any notifications/alerts/actions they receive will originate from HCM only.  Basically, the partitioning between the applications is maintained.  If users want to search in FSCM, they must log out of HCM and log into FSCM, which at that point will become the acting portal system.  

Now let’s look at an example in which the customer is using the Interaction Hub to unify the cluster including HCM and FSCM.  In this scenario, the user logs into the Interaction Hub system, which provides the header and context.  The home page displayed initially can come from HCM or FSCM (or any application), and the Nav Bar can contain content from any PeopleSoft application where the content from these applications is registered.  When the user navigates to any content--whether it comes from the Interaction Hub, HCM or FSCM--the header and the Nav Bar are from the Interaction Hub system, which is where the user originated.  In this case, even when the user navigates to an HCM or FSCM transaction page, they can still use the header or the Nav Bar to navigate to any content in the cluster seamlessly.  In addition, if the user executes a search, the results of that search can come from any application in the cluster—in this case HCM, FSCM, or the Interaction Hub.  Furthermore, the user can take action from those search results using Related Actions regardless of where that content originated.  Similarly, the notifications in the header will deliver actions and alerts from all content systems in the cluster, and users can act on those notifications from the notifications window.

What We Recommend

Customers that have a cluster of multiple PeopleSoft applications should use the Interaction Hub to unify their user experience.  The Hub enables you to break down the barriers between applications and align your user experience with your business processes.  This enables your users to navigate freely among PeopleSoft content (and even external content) to complete their work--without having to know which PeopleSoft application is being used.  In addition, users don’t have to log in and out of different applications to complete business processes that may cross application boundaries.  Instead, the Hub presents the PeopleSoft cluster as a single unified ecosystem.

Resolving Update-Update Conflict in Peer-to-Peer Replication

Pythian Group - Mon, 2016-02-08 15:20

Recently I had received a hand-off ticket which was about a replication issue. The system has been configured with the replication of Peer-to-Peer type.

One of the subscribers was throwing an error reading: A conflict of type ‘Update-Update’ was detected at peer 4 between peer 100 (incoming), transaction id 0x000000000011a1c3 and peer 100 (on disk). While I was working on this issue and trying to resolve it, I noticed that it wasn't showing any records in the msrepl_errors table or the conflict_dbo_table.

Here again, the Error Logs help, as they have the complete details logged, which helped us identify the table name and the exact error with the record. If that hadn't been the case, I would have followed the Replication Troubleshooting method described in KB 3066750 to fix the issue.

At this time, with the information we had in hand, we reached out to the customer and resolved the issue by fixing it manually. I would like to mention that there are always two ways conflicts are handled in P2P replication:

1) Manually fix the conflict/data issue
2) Let the winner node take precedence over the conflicting data

In P2P replication, at the time of configuration we have the option to choose which node will have precedence and be declared the winner. This is decided by way of the originator_id; the highest originator_id wins. We have to decide this carefully because, once the setup is done and the originator_id is allotted, it cannot be altered later.

Here are a few reference articles that will help you understand this topic better:

https://msdn.microsoft.com/en-IN/library/bb934199%28v=sql.105%29.aspx
https://msdn.microsoft.com/en-IN/library/ms151191%28v=sql.105%29.aspx
http://blogs.msdn.com/b/change_sql_virtual_ip_from_dhcp_to_static_ip/archive/2015/11/04/conflicts-in-peer-to-peer-replication.aspx
http://blogs.msdn.com/b/repltalk/archive/2010/02/07/repltalk-start-here.aspx

 

Categories: DBA Blogs

New ORAchk 12.1.0.2.6 beta

Pythian Group - Mon, 2016-02-08 15:17

 

Oracle recently released the new beta 12.1.0.2.6 version of the ORAchk utility. If you are an Oracle DBA and not yet familiar with the utility, I advise you to try it out. In short, the utility is a proactive tool that scans your system for known issues and provides an excellent report in HTML format. In addition to that, you get a collection manager to manage reports for multiple databases, a check for upgrade readiness and other features. I strongly recommend trying the utility and using it regularly.
You can download the new version of ORAchk, the Health Check Catalog and all related support files and guides from Oracle Support (Document 1268927.2). Simply unzip ORAchk to a directory and run orachk, preferably as root, since that allows it to execute all system-wide checks. Here is an example:

[oracle@bigdatalite u01]$ mkdir orachk
[oracle@bigdatalite u01]$ cd orachk/
[oracle@bigdatalite orachk]$ unzip ../distr/orachk121026.zip
Archive: ../distr/orachk121026.zip
inflating: CollectionManager_App.sql
inflating: sample_user_defined_checks.xml
creating: .cgrep/
................
[oracle@bigdatalite orachk]$ su -
[root@bigdatalite ~]# cd /u01/orachk/
[root@bigdatalite orachk]# ./orachk

At the end you get an HTML report and a zip file with the results of all executed checks:

Detailed report (html) – /u01/orachk/orachk_bigdatalite_orcl_012816_151905/orachk_bigdatalite_orcl_012816_151905.html

UPLOAD(if required) – /u01/orachk/orachk_bigdatalite_orcl_012816_151905.zip

The report is really good looking, split into different sections, and allows you to hide or show checks based on their status.

I compared the new 12.1.0.2.6 version against 12.1.0.2.5. The execution time for the new version was 3 minutes versus 8 minutes for the old one. The new report format is far more usable; you don't need to jump back and forth, since the results for every check expand in place.
If you haven't used the utility so far, I highly recommend you download it and try it out.

Categories: DBA Blogs

Log Buffer #459: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2016-02-08 14:40

This Log Buffer Edition arranges a few tips and tricks from the blogs of Oracle, SQL Server and MySQL.

Oracle:

Oracle ® Solaris innovation is due in part to the UNIX® standard (1), the test suites (2) and the certification (3). By conforming to the standard, using the test suites and driving to certification, Oracle ® Solaris software engineers can rely on stable interfaces and an assurance that any regressions will be found quickly given more than 50,000 test cases.

Building on the program established last year to provide evaluation copies of popular FOSS components to Solaris users, the Solaris team has announced the immediate availability of additional and newer software, ahead of official Solaris releases.

Tracing in Oracle Reports 12c.

Issues with Oracle Direct NFS.

An interesting observation came up on the Oracle-L list server a few days ago that demonstrated how clever the Oracle software is at minimising run-time work, and how easy it is to think you know what an execution plan means when you haven’t actually thought through the details – and the details might make a difference to performance.

SQL Server:

Manipulating Filetable Files Programatically

Auto-suggesting foreign keys and data model archaeology

Create/write to an Excel 2007/2010 spreadsheet from an SSIS package.

Tabular vs Multidimensional models for SQL Server Analysis Services.

The PoSh DBA – Towards the Re-usable PowerShell Script.

MySQL:

MyRocks vs InnoDB with Linkbench over 7 days.

MySQL has been able to harness the potential of more powerful (CPU) and larger (RAM, disk space) hardware.

Setup a MongoDB replica/sharding set in seconds.

MySQL 5.7 makes secure connections easier with streamlined key generation for both MySQL Community and MySQL Enterprise, improves security by expanding support for TLSv1.1 and TLSv1.2, and helps administrators assess whether clients are connecting securely or not.

While EXPLAIN shows the selected query plan for a query, optimizer trace will show you WHY the particular plan was selected. From the trace you will be able to see what alternative plans were considered, the estimated costs of different plans, and what decisions were made during query optimization.

Categories: DBA Blogs

NEW OTN Virtual Technology Summit Sessions Coming!

OTN TechBlog - Mon, 2016-02-08 12:36

Join us for free Hands-On Learning with Oracle and Community Experts! The Oracle Technology Network invites you to attend one of our next Virtual Technology Summits on March 8th, 15th and April 5th. Hear Oracle ACEs, Java Champions and Oracle Product Experts share their insights and expertise through Hands-on-Labs (HOL), highly technical presentations and demos. This interactive, online event offers four technical tracks:

Database: The database track provides latest updates and in-depth topics covering Oracle Database 12c Advanced Options, new generation application development with JSON, Node.js and Oracle Database Cloud, as well as sessions dedicated to the most recent capabilities of MySQL, benefiting both Oracle and MySQL DBAs and Developers.

Middleware: The middleware track is for developers focused on gaining new skills and expertise in emerging technology areas such as the Internet of Things (IoT), Mobile and PaaS. This track also provides the latest updates on Oracle WebLogic 12.2.1 and Java EE.

Java: In this track, we will show you improvements to the Java platform and APIs. You’ll also learn how the Java language enables you to develop innovative applications using microservices, parallel programming and integration with other languages and tools, as well as insight into the APIs that will substantially boost your productivity.

System: Designed for system administrators, this track covers best practices for implementing, optimizing, and securing your operating system, management tools, and hardware. In addition, we will also discuss best practices for Storage, SPARC, and software development.

Register Today -

March 8th, 2016 - 9:30am to 1:30pm PT / 12:30pm to 4:30pm ET / 3:30pm to 7:30pm BRT

March 15, 2016 - 9:30 a.m. to 1:30 p.m. IST / 12:00 p.m. to 4:00 p.m. SGT  / 3:00 p.m. to 7:00 p.m. AEDT

April 5, 2016 - 3:30 p.m. to 7:30 p.m. BRT  / 09:30 - 13:00 GMT (UK)  / 10:30 - 14:00 CET

Amazon Web Services (AWS) : Relational Database Services (RDS) for MySQL

Tim Hall - Mon, 2016-02-08 08:35

Here’s another video on my YouTube channel. This one is a quick run through of RDS for MySQL, a DBaaS offering from Amazon Web Services.

The video was based on this article.

If you watch the little outtake at the end you will hear me cracking up with the goofiest laugh while filming Brian ‘Bex’ Huff’s clip. :)

Cheers

Tim…


Live Webcast: Driving IT Innovation with Cloud Self Service and Engagement

WebCenter Team - Mon, 2016-02-08 06:00
Oracle Corporation Webcast: Driving IT Innovation with Cloud Self-Service and Engagement - Re-defining IT’s role in Line of Business user empowerment

With 90% of the IT budget still being spent on “Keeping the Lights On”, over 40% of organizations are now seeing lines of business taking on IT projects. Shadow IT projects, however, mean additional integration, security and maintenance overheads, and service response lag times.

Join this webcast to learn how IT can leverage cloud to empower lines of business users, drive business agility and still maintain security and integrity. You will hear about:
  • Oracle’s holistic approach to collaboration, self-service and engagement
  • Demonstration of real life use of cloud solution for business user empowerment
  • Customer success stories and usage patterns
Register Now for this webcast: February 17, 2016, 10:00 AM PT / 1:00 PM ET
#OracleDOCS #OraclePCS #OracleSCS

SPEAKER: David Le Strat, Senior Director Product Management, Content and Process, Oracle

Using Python to ‘Wrangle’ Your Messy Data

Rittman Mead Consulting - Mon, 2016-02-08 00:00
 or How to Organize Your Comic Book Collection Based on Issue Popularity

In addition to being a product manager at Rittman Mead, I consider myself to be a nerd of the highest order. My love of comic books, fantasy, sci-fi and general geekery began long before the word ‘Stark’ had anything to do with Robert Downey Jr or memes about the impending winter. As such, any chance to incorporate my hobbies into my work is a welcomed opportunity. For my next few blog entries, I’ve decided to construct a predictive classification model using comic book sales data whose eventual goal will be to build a model that can accurately predict whether a comic will rocket off the shelves or if it will be a sales dud. The first blog of the series shows some of the pitfalls that can come up when preparing your data for analysis. Data preparation, or data wrangling as it has come to be known, is an imperfect process that usually takes multiple iterations of transformation, evaluation and refactoring before the data is “clean” enough for analysis.

While the steps involved in data wrangling vary based on the state and availability of the raw data, for this blog I have chosen to focus on the gathering of data from disparate sources, the enrichment of that data by merging their attributes and the restructuring of it to facilitate analysis. Comic book sales data is readily available on the interwebs; however, finding that data in a usable format proved to be a more difficult task. In the end, I had to resort to the dreaded process of screen scraping the data from a comic research site. For those of you who are lucky enough to be unfamiliar with it, screen scraping is the process of programmatically downloading HTML data and stripping away that formatting to make it suitable for use. This is generally used as a last resort because web sites are volatile creatures that are prone to change their appearance as often as my teenage kids do preparing to leave the house. However, for the purposes of this series, as my friend Melvin the handyman would often say, “We works with what we gots.”

This leads us to the first issue you may run into while wrangling your data. You have access to lots of data but it’s not pretty. So make it pretty. Working with raw data is like being a sculptor working with wood. Your job is not to change the core composition of the data to suit your purposes but to chip away at the excess to reveal what was there all along, a beautiful horse… er I mean insight. Sorry, I got lost in my analogy. Actually to expand on this analogy a bit, the first tool I pulled out of my toolbox for this project was Python, the Leatherman of programming languages. Python is fast, plays well with other technologies and most importantly in this case, Python is ubiquitous. Used for tasks ranging from process automation and ETL to gaming and academic pursuits, Python is truly a multipurpose tool. As such, if you have a functional need, chances are there is a native module or someone has already written a public library to perform that function. In my case, I needed some scripts to “scrape” HTML tables containing comic sales data and combine that data with other comic data that I retrieved elsewhere. The “other” data is metadata about each of the issues. Metadata is just data about data. In this case, information about who authored it, how it was sold, when it was published, etc… More on that later.

Luckily for me, the format of the data I was scraping was tabular, so extracting the data and transforming it into Python objects was a relatively simple matter of iterating through the table rows and binding each table column to the designated Python object field. There was still a lot of unnecessary content on the page that needed to be ignored, like the titles and all of the other structural tags, but once I found the specific table holding the data, I was able to isolate it. At that point, I wrote the objects to a CSV file, to make the data easy to transport and to facilitate usability by other languages and/or processes.

The heavy lifting in this process was performed by three different Python modules: urllib2, bs4 and csv. Urllib2, as the name implies, provides functions to open URLs. In this case, I found a site that hosted a page containing the estimated issue sales for every month going back to the early 1990’s. To extract each month without manually updating a hardcoded URL over and over, I created a script, month_sales_scraper.py, that accepted MONTH and YEAR as arguments.

The response from the urlopen(url) function call was the full HTML code that is typically rendered by a web browser. In that format, it does me very little good, so I needed to employ a parser to extract the data from the HTML. In this context, a parser is a program that is used to read in a specific document format, break it down into its constituent parts while preserving the established relationships between those parts, and then finally provide a means to selectively access said parts. So an HTML parser would allow me to easily access all the <TD> column tags for a specific table within an HTML document. For my purposes, I chose BeautifulSoup, or bs4.

BeautifulSoup provided search functions that I used to find the specific HTML table containing the sales data and loop through each row, while using the column values to populate a Python object.

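The original post shows this logic as screenshots, so the sketch below is my own reconstruction of the general approach rather than the actual month_sales_scraper.py. The URL pattern, column positions and field names are assumptions for illustration only.

import sys
import urllib2
from bs4 import BeautifulSoup

def scrape_month(month, year):
    # Placeholder URL pattern - the real site and path differ.
    url = "http://example.com/comic-sales/%s/%s" % (year, month)
    html = urllib2.urlopen(url).read()
    soup = BeautifulSoup(html, "html.parser")

    rows = []
    table = soup.find("table")              # in practice, locate the specific sales table
    for tr in table.find_all("tr")[1:]:     # skip the header row
        cols = [td.get_text(strip=True) for td in tr.find_all("td")]
        if len(cols) < 4:
            continue                        # ignore structural/spacer rows
        rows.append({
            "year": year,
            "month": month,
            "rank": cols[0],
            "title": cols[1],
            "price": cols[2],
            "est_sales": cols[3],
        })
    return rows

if __name__ == "__main__":
    month, year = sys.argv[1], sys.argv[2]
    for row in scrape_month(month, year):
        print(row)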

This Python object, named data, contains fields populated with data from different sources. The year and month are populated using the arguments passed to the module. The format field is dynamically set based on logic related to the rankings and the remaining fields are set based on their source’s position in the HTML table. As you can see, there is a lot of hard coded logic that would need to be updated, should the scraped site change their format. However, for now this logic gets it done.

The final step of this task was to write those Python objects to a CSV file. The Python csv module provides the function writerow(), which accepts a list as a parameter and writes each of its elements as a column in the CSV.

My first pass raised an exception because the title field contained unicode characters that the CSV writer could not handle.

To rectify this, I had to add a check for unicode and encode the content as UTF-8. Unicode and UTF-8 are character encodings, meaning they provide a map computers use to identify characters. This includes alphabets and logographic symbols from different languages as well as common symbols like ®.

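As a rough reconstruction of that fix (again, not the post's actual code), the following Python 2 sketch encodes any unicode value as UTF-8 before handing the row to csv.writer; the column names are assumptions.

import csv

def clean_cell(value):
    # Python 2's csv module cannot write raw unicode, so encode it as UTF-8 first.
    if isinstance(value, unicode):
        return value.encode("utf-8")
    return value

def write_month(filename, rows):
    with open(filename, "wb") as f:
        writer = csv.writer(f)
        writer.writerow(["year", "month", "rank", "title", "price", "est_sales"])
        for row in rows:
            writer.writerow([clean_cell(row[key]) for key in
                             ("year", "month", "rank", "title", "price", "est_sales")])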

Additionally, there was the matter of reformatting the values of some of the numeric fields to allow math to be performed on them later (i.e. stripping ‘$’ and commas). Other than that, the data load went pretty smoothly. A file named (MONTH)_(YEAR).CSV was generated for each month.

While this generated tens of thousands of rows of comic sales data, it was not enough. Rather, it had the volume but not the breadth of information I needed. In order to make an accurate prediction, I needed to feed more variables to the model than just the comic’s title, issue number, and price. The publisher was not relevant as I decided to limit this exercise to only Marvel comics, and passing in the estimated sales would be cheating, as rank is based on sales. So to enhance my dataset, I pulled metadata about each of the issues down from “the Cloud” using Marvel’s Developer API. Thankfully, since the API is a web service, there was no need for screen scraping.

Retrieving and joining this data was not as easy as one might think. My biggest problem was that the issue titles from the scraped source were not an exact match to the titles stored in the Marvel database. For example, the scraped dataset lists one title as “All New All Different Avengers”. Using their API to search the Marvel database with that title retrieved no results. Eventually, I was able to manually find it in their database listed as “All-New All-Different Avengers”. In other cases, there were extra words like “The Superior Foes of Spider-Man” vs “Superior Foes of Spider-Man”. So in order to perform a lookup by name, I needed to know the titles as they expected them. To do this I decided to pull a list of all the series titles whose metadata was modified during the timeframes for which I had sales data. Again, I ran into a roadblock. The Marvel API only allows you to retrieve up to 100 results per request and Marvel has published thousands of titles. To get around this I had to perform incremental pulls, segmented alphabetically.

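A schematic version of those segmented pulls (mine, not the original script) is shown below. fetch_series_page() is a hypothetical helper standing in for the actual Marvel API call; it is assumed to return at most 100 (series_id, title, rating) tuples for titles starting with a given letter, in the requested sort order.

import string

def fetch_all_series(fetch_series_page):
    """Collect series for every starting letter, de-duplicating on series id."""
    seen = {}
    for letter in string.ascii_uppercase:
        # Letters with more than 100 titles (like 'S') need both sort directions;
        # merging on series id removes the duplicates in the overlap.
        for order in ("title", "-title"):
            for series_id, title, rating in fetch_series_page(letter, order=order):
                seen[series_id] = (title, rating)
    return seen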

Even then there were a few minor issues, as some letters like ‘S’ had more than 100 titles. To get around that I had to pull the list for ‘S’ titles sorted ascending and descending, then combine the results, making sure to remove duplicates. So my advice on this one is to be sure to read up on the limitations of any API you are using. It may enforce limits, but you may be able to work around them with creative querying.


At this point I have my list of Marvel series titles, stored in some CSV files that I eventually combined into a single file, MarvelSeriesList.csv, for ease of use. Actually, I have more than that. While retrieving the series titles, I also pulled down the ID for each series and an appropriateness rating. Searching the API by ID will be much more accurate than searching by name, and the appropriateness rating may be useful when building out the prediction model. The next step was to iterate through each row of the CSVs we created from the sales data, find the matching ID from MarvelSeriesList.csv and use that ID to retrieve its metadata using the API.

If you remember, the point of doing that last step was that the titles stored in the sales data files don’t match the titles in the API, so I needed to find a way to join the two sources. Rather than writing cases to handle each of the scenarios (e.g. mismatched punctuation, extra filler words), I looked for a Python library to perform some fuzzy matching. What I found was an extremely useful library called Fuzzy Wuzzy. Fuzzy Wuzzy provides a function called extractOne() that allows you to pass in a term and compare it with an array of values. The extractOne() function will then return the term in the array that has the highest match percentage. Additionally, you can specify a lower bound for acceptable matches (i.e. only return a result where the match is >= 90%).

Again, it took a few passes to get the configuration to work effectively. The first time through, only about 65% of the titles in the sales file found a match. That was throwing away too much data for my liking, so I had to look at the exceptions and figure out why the matches were falling through. One issue I found was that titles which tacked on the publication year in the Marvel database, like “All-New X-Men (2012)”, had a match score in the 80’s when matched against a sales title like “All New X-Men”. This was a pretty consistent issue, so rather than lowering the match percentage, which could introduce some legitimate mismatches, I decided to strip the year, if present, on mismatches and run it through the matching process again. This got me almost there. The only other issue I ran into was that Fuzzy Wuzzy had trouble matching acronyms/acrostics. So ‘S.H.I.E.L.D.’ had a match score in the 50’s when matching ‘SHIELD’. That’s because half the characters (periods) were missing. Since there were only two titles affected, I built a lookup dictionary of special cases that needed to be translated. For the purposes of this exercise, I would still have had enough matches to skip that step, but doing it brought us up to 100% matching between the two sources. Once the matching function was working, I pulled out urllib2 and retrieved all the metadata I could for each of the issues.
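
To make that flow concrete, here is a rough sketch of the matching logic using Fuzzy Wuzzy's extractOne(). The 90% cut-off, the year-stripping retry and the special-case lookup mirror the description above, but the exact code (and the contents of the special-case dictionary) are my own assumptions.

import re
from fuzzywuzzy import process

# Hypothetical lookup for the handful of acronym titles that fuzzy matching misses.
SPECIAL_CASES = {"SHIELD": "S.H.I.E.L.D."}

def match_title(sales_title, marvel_titles, cutoff=90):
    query = SPECIAL_CASES.get(sales_title, sales_title)

    # First pass: match the sales title directly against the Marvel titles.
    result = process.extractOne(query, marvel_titles, score_cutoff=cutoff)
    if result:
        return result[0]

    # Second pass: strip trailing publication years like "(2012)" from the
    # Marvel titles and try again, mapping the match back to the original title.
    stripped = dict((re.sub(r"\s*\(\d{4}\)$", "", t), t) for t in marvel_titles)
    result = process.extractOne(query, list(stripped.keys()), score_cutoff=cutoff)
    return stripped[result[0]] if result else None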

The resulting files contained not only sales data (title, issue number, month rank, estimated sales), but also information about the creative team, issue descriptions, characters, release dates and associated story arcs. This would be enough to get us started with building our predictive classification model.

[Image: the combined CSV output]

That being said, there was still a lot of structural rearranging required to make the data ready for the type of analysis I wanted to do, but we will deal with that in the next blog. Hopefully, you picked up some useful tips on how to combine data from different sources, or at the very least found solace in knowing that while you may not be the coolest person in the world, somewhere out there is a grown man who still likes to read comic books enough to write a blog about it. Be sure to tune in next week, True Believers, as we delve into The Mysteries of R!

The post Using Python to ‘Wrangle’ Your Messy Data appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Resolving Hardware Issues with a Kernel Upgrade in Linux Mint

The Anti-Kyte - Sun, 2016-02-07 11:40

One evening recently, whilst climbing the wooden hills with netbook in hand, I encountered a cat who had decided that halfway up the stairs was a perfect place to catch forty winks.
One startled moggy later, I had become the owner of what I can only describe as…an ex-netbook.

Now, finally, I’ve managed to get a replacement (netbook, not cat).

As usual when I get a new machine, the first thing I did was to replace Windows with Linux Mint…with the immediate result being that the wireless card stopped working.

The solution ? Don’t (kernel) panic, kernel upgrade !

Support for most of the hardware out there is included in the Linux Kernel. The kernel is enhanced and released every few months. However, distributions, such as Mint, tend to stick on one kernel version for a while in order to provide a stable base on which to develop.
This means that, if Linux is not playing nicely with your Wireless card/web-cam/any other aspect of your machine’s hardware, a kernel upgrade may resolve your problem.
Obviously it’s always good to do a bit of checking to see if this might be the case.
It’s also good to have a way of putting things back as they were should the change we’re making not have the desired effect.

What I’m going to cover here is the specific issue I encountered with my new Netbook and the steps I took to figure out what kernel version might fix the problem.
I’ll then detail the kernel upgrade itself.

Machine details

The machine in question is an Acer TravelMate-B116.
It has an 11.6 inch screen, 4GB RAM and a 500GB HDD.
For the purposes of the steps that follow, I was able to connect to the internet via a wired connection to my router. Well, up until I got the wireless working.
The Linux OS I’m using is Linux Mint 17.3 Cinnamon.
Note that I have disabled UEFI and am booting the machine in Legacy mode.

Standard Warning – have a backup handy !

In my particular circumstances, I was trying to configure a new machine. If it all went wrong, I could simply re-install Mint and be back where I started.
If you have stuff on your machine that you don’t want to lose, it’s probably a good idea to back it up onto separate media ( e.g. a USB stick).
Additionally, if you are not presented with a grub menu when you boot your machine, you may consider running the boot-repair tool.
This will ensure that you have the option of which kernel to use if you have more than one to choose from ( which will be the case once you’ve done the kernel upgrade).

It is possible that upgrading the kernel may cause issues with some of the hardware that is working fine with the kernel you currently have installed, so it’s probably wise to be prepared.

Identifying the card

The first step then, is to identify exactly which wireless network card is in the machine.
From a terminal window …

lspci

00:00.0 Host bridge: Intel Corporation Device 2280 (rev 21)
00:02.0 VGA compatible controller: Intel Corporation Device 22b1 (rev 21)
00:0b.0 Signal processing controller: Intel Corporation Device 22dc (rev 21)
00:13.0 SATA controller: Intel Corporation Device 22a3 (rev 21)
00:14.0 USB controller: Intel Corporation Device 22b5 (rev 21)
00:1a.0 Encryption controller: Intel Corporation Device 2298 (rev 21)
00:1b.0 Audio device: Intel Corporation Device 2284 (rev 21)
00:1c.0 PCI bridge: Intel Corporation Device 22c8 (rev 21)
00:1c.2 PCI bridge: Intel Corporation Device 22cc (rev 21)
00:1c.3 PCI bridge: Intel Corporation Device 22ce (rev 21)
00:1f.0 ISA bridge: Intel Corporation Device 229c (rev 21)
00:1f.3 SMBus: Intel Corporation Device 2292 (rev 21)
02:00.0 Network controller: Intel Corporation Device 3165 (rev 81)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)

It looks like the penultimate entry is our wireless card.
It is possible to get details of the card you have by using “Intel Corporation Device 3165” as a search term. However, we may be able to get the name of the card by running :

lspci -vq |grep -i wireless -B 1 -A 4

In my case, this returns :

02:00.0 Network controller: Intel Corporation Wireless 3165 (rev 81)
	Subsystem: Intel Corporation Dual Band Wireless AC 3165
	Flags: bus master, fast devsel, latency 0, IRQ 200
	Memory at 91100000 (64-bit, non-prefetchable) [size=8K]
	Capabilities: <access denied>

Further digging around reveals that, according to Intel, this card is supported in Linux starting at kernel version 4.2.

Now, which version of the Kernel are we actually running ?

Identifying the current kernel version and packages

This is relatively simple. In the Terminal just type :

uname -r

On Mint 17.3, the output is :

3.19.0-32-generic

At this point, we now know that an upgrade to the kernel may well solve our wireless problem. The question now is, which packages do we need to install to effect the upgrade ?

If you look in the repositories, there appear to be at least two distinct versions of kernel packages, the generic and something called low-latency.
In order to be confident of which packages we want to get, it’s probably a good idea to work out what we have now.
This can be achieved by searching the installed packages for the version number of the current kernel.
We can do this in the terminal :

dpkg --list |grep 3.19.0-32 |awk '{print $2}'

In my case, this returned :

linux-headers-3.19.0-32
linux-headers-3.19.0-32-generic
linux-image-3.19.0-32-generic
linux-image-extra-3.19.0-32-generic
linux-kernel-generic

As an alternative, you could use the graphical Synaptic Package Manager.
You can start this from the menu ( Administration/Synaptic Package Manager).

[Screenshot: Synaptic Package Manager]

Now we know what we’ve got, the next step is to find the kernel version that we need…

Getting the new kernel packages

It may well be the case that the kernel version you’re after has already been added to the distro’s repository.
To see if this is the case, use Synaptic Package Manager to search as follows :

Start Synaptic Package Manager from the System Menu.
You will be prompted for your password.

Click the Status button and select Not Installed

[Screenshot: Synaptic with the Status filter set to Not Installed]

In the Quick filter bar, enter the text : linux-headers-4.2*-generic

[Screenshot: Synaptic Quick filter results for linux-headers-4.2*-generic]

This should give you a list of any kernel 4.2 versions available in the repository.

If, as I did, you find the version you’re looking for, you need to select the packages that are equivalent to the ones you already have installed on your system.
Incidentally, there are a number of 4.2 kernel versions available, so I decided to go for the latest.
In my case then, I want to install :

  • linux-headers-4.2.0-25
  • linux-headers-4.2.0-25-generic
  • linux-image-4.2.0-25-generic
  • linux-image-extra-4.2.0-25-generic

NOTE – If you don’t find the kernel version you are looking for, you can always download the packages directly using these instructions.

Assuming we have found the version we want, we now need to search for the relevant packages.
In the Quick filter field in Synaptic, change the search string to : linux-*4.2.0-25

To mark the packages for installation, right-click each one in turn and select Mark for Installation

[Screenshot: the packages marked for installation in Synaptic]

Once you’ve selected them all, hit the Apply button.

Once the installation is completed, you need to re-start your computer.

On re-start, you should find that the Grub menu has an entry for Advanced Options.
If you select this, you’ll see that you have a list of kernels to choose to boot into.
This comes in handy if you want to go back to running the previous kernel version.

For now though, we’ll boot into the kernel we’ve just installed.
We can confirm that the installation has been successful, once the machine starts, by opening a Terminal and running :

uname -r

If all has gone to plan, we should now see…

4.2.0-25-generic

Even better in my case, my wireless card has now been recognised.
Opening the systray icon, I can enable wireless and connect to my router.

Backing out of the Kernel Upgrade

If you find that the effects of the kernel upgrade are undesirable, you can always go back to the kernel you started with.
If at all possible, I’d recommend starting Mint using the old kernel before doing this.

If you’re running on the kernel for which you are deleting the packages, you may get some alarming warnings. However, once you re-start, you should be back to your original kernel version.

The command then, is :

sudo apt-get remove linux-headers-4.2* linux-image-4.2*

…where 4.2 is the version of the kernel you want to remove.
Run this and the output looks like this…

The following packages will be REMOVED
  linux-headers-4.2.0-25 linux-headers-4.2.0-25-generic
  linux-image-4.2.0-25-generic linux-image-extra-4.2.0-25-generic
  linux-signed-image-4.2.0-25-generic
0 to upgrade, 0 to newly install, 5 to remove and 7 not to upgrade.
After this operation, 294 MB disk space will be freed.
Do you want to continue? [Y/n]

Once the packages have been removed, the old kernel will be in use on the next re-boot.
After re-starting, you can check this with :

uname -r

Thankfully, these steps proved unnecessary in my case and the kernel upgrade has saved me from hardware cat-astrophe.


Filed under: Linux, Mint Tagged: Acer TravelMate-B116, apt-get remove, dpkg, Intel Corporation Dual Band Wireless AC 3165, kernel upgrade, lspci, synaptic package manager, uname -r

LittleArduinoProjects#018 The Fretboard - a multi-project build status monitor

Paul Gallagher - Sun, 2016-02-07 10:13
(blogarhythm ~ Diablo - Don't Fret)

The Fretboard is a pretty simple Arduino project that visualizes the build status of up to 24 projects with an addressable LED array. The latest incarnation of the project is housed in an old classical guitar … hence the name ;-)

All the code and design details for The Fretboard are open-source and available at fretboard.tardate.com. Feel free to fork or borrow any ideas for your own build. If you build anything similar, I'd love to hear about it.

LittleArduinoProjects#100 Retrogaming on an Arduino/OLED "console"

Paul Gallagher - Sun, 2016-02-07 10:12
(blogarhythm ~ invaders must die - The Prodigy)
Tiny 128x64 monochrome OLED screens are cheap and easy to come by, and quite popular for adding visual display to a microcontroller project.

My first experiments in driving them with raw SPI commands had me feeling distinctly old school, as the last time I remember programming a bitmap screen display was probably about 30 years ago!

So while in a retro mood, what better than to attempt an arcade classic? At first I wasn't sure it was going to be possible to make a playable game, given the limited Arduino memory and the relatively slow screen communication protocol.

But after a few tweaks of the low-level SPI implementation, I surprised myself with how well it can run. There were even enough clock cycles left to throw in a sound track and effects.

Here's a quick video on YouTube of the latest version: ArdWinVaders! … in full lo-rez monochrome glory, packed into 14kb and speeding along at 8MHz.



Full source and schematics are in the LittleArduinoProjects collection on Github.

LittleArduinoProjects#174 USB LED Notifiers

Paul Gallagher - Sun, 2016-02-07 10:05
So four of these USB Webmail Notifier devices turned up in a dusty cupboard in the office.

A quick tear-down shows they contain a super-simple circuit - just a SONiX Technology SN8P2203SB 8-bit microcontroller that handles the USB protocol and drives an RGB LED. The SN8P2203SB is an old chip, phased out on 2010/04/30 and superseded by the SN8P2240. It has a supremely primitive USB implementation - basically mimicking a very basic USB 1.0 HID device.

A quick google reveals quite a bit of old code lying around for various projects using devices like this. Most seem to use libusb for convenience - and often the legacy 0.1 libusb at that. As I'm mainly on MacOSX, that code is not much use to me, since Apple no longer allows claiming of HID devices and the libusb team decided not to try to get around that.

So to bring things up to date, I wrote a simple demo using hidapi, and it all works fine - see the video below.
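For anyone who wants to poke at one of these from Python instead, the Python bindings for hidapi (installed as the hidapi package, imported as hid) only need a few lines. The vendor/product IDs and the report bytes below are placeholders - the real values depend on the device:

import hid

VENDOR_ID = 0x1234   # placeholder - read the real IDs from hid.enumerate()
PRODUCT_ID = 0x5678  # placeholder

# List every HID device the OS reports, to confirm the notifier is visible.
for info in hid.enumerate():
    print info['vendor_id'], info['product_id'], info['product_string']

device = hid.device()
device.open(VENDOR_ID, PRODUCT_ID)
device.write([0x00, 0x02])  # assumed report: a report ID byte then a colour byte
device.close()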

Now I just need to ponder on good ideas for what to do with these things!

As always, all notes and code are on GitHub.