Development

Do animals have souls?

FeuerThoughts - Thu, 2014-05-29 09:04
OK, first of all, don't tell me your answer to this question. That would make the rest of this post seem a bit rude.

Here is one of the dumbest questions I can ever imagine a person asking, much less answering:

Do animals have souls?

How utterly ridiculous.

No one knows what a soul is. No one knows what it looks like, what it means, whether or not it really exists.

Furthermore, we certainly have no idea - please allow me to repeat that because I think it is so fundamental to accept this as fact: we have no idea at all - of what is going on inside an animal’s head. Clearly, a whole lot is going on, if you take the time to pay attention to animals and think about what it takes to do what they do. But many of the things humans blithely state as fact regarding animals, such as “They don’t know the difference between right and wrong,” are fundamentally meaningless, because we simply cannot know what is going on inside another creature’s mind. We just make the assumption that they are really super different from us in all the ways that matter - to us.

We are intelligent, moral, sentient. We are smart and they are dumb, brute animals. We are conscious, we have history, philosophy, nuclear power. What do animals have? Nothing!

Oh really? How do we know what animals have? Or even what “have” means to a butterfly or a snake or a black bear? Again, we really have no idea whatsoever what animals have, what they want, or how they would feel about killing others just to make themselves comfortable (something that we humans do every second of every day).

So we make the most self-serving assumption imaginable. We simply outright declare that other creatures have no souls, are not sentient. They are food or threat or benign, but they are not like us.

We will continue to reject the evidence of our senses, the clear demonstrations of sentience, of complex social structures, in other animals. That way we don’t have to feel bad about enslaving them and killing them. Think for just a moment about how smart pigs are, and then think about pig farms in which tens of thousands of these poor creatures live short miserable lives - brought into this world for the very purpose of slaughtering them for bacon. And then later a dam bursts and an entire town is swamped with pig feces from the refuse lake at the farm. Go, humans, go!

I sure am glad there wasn’t and isn’t a species of creature on this planet that's three times our size, extremely powerful and licking its lips at the prospect of a nicely smoked human torso. 

We do not know what goes on inside a pig’s head, but it sure seems like they can feel and express terror. 

So, yes, humans will keep on keeping on, keep on consuming, reproducing, and assuming. But that doesn't mean we can’t try to recover a shred, a mere shred, of our individual dignity by at least acknowledging what we are doing, and taking at least one step, no matter how small, to help heal our planet and our co-inhabitants.

We can start by acknowledging, accepting, that the thing that we believe makes us unique and special among all living things is simply an unknowable assumption we make. It is an arbitrary, self-serving action - and brings into question the very idea that humans can be considered moral creatures. 
Categories: Development

Configure Coherence HotCache

Edwin Biemond - Tue, 2014-02-04 22:29
Coherence can really accelerate and improve your application because it's fast, highly available, easy to set up and scalable. And when you use it together with the JCache framework of Java 8 or the new Coherence Adapter in Oracle SOA Suite and OSB 12c, it becomes even easier to use Coherence as your main HA cache. Before Coherence 12.1.2, when you want to use Coherence together with

REST, SSE or WebSockets on WebLogic 10.3.6

Edwin Biemond - Wed, 2014-01-15 14:10
WebLogic 10.3.6 comes with Jersey 1.9 and has no support for Server-Sent Events or WebSockets. But for one of our projects we are making an HTML5 / AngularJS application, which needs to invoke some RESTful services, and we also want to make use of SSE or WebSockets. Of course we could use WebLogic 12.1.2, but we already have an OSB / SOA Suite WebLogic 10.3.6 environment. So when you want to pimp your

new Puppet 3 Weblogic provisioning module

Edwin Biemond - Sun, 2013-11-24 14:22
The last few weeks I was busy rewriting my puppet WLS module so it fully supports the power of Puppet 3 (thanks for more than 4000 downloads on Puppet Forge and all the GitHub downloads). With Puppet 3 we can now use Hiera, iterations and lambda expressions. This does not sound like a big change, but with Hiera and the new Puppet language features I can define big WebLogic domains without

Creating your own Virtualbox Development Images

Edwin Biemond - Sat, 2013-11-16 15:11
For my Oracle Puppet provisioning development I can't do without these image creation tools: Packer and Vagrant, in combination with Oracle VirtualBox or VMware. In this blogpost I will explain what these tools can do for you and how you can make your own images and use Puppet as your provisioning tool. With Vagrant you can create your own virtual images and it can start Puppet or Chef to do all the

The road ahead for WebLogic 12c

Edwin Biemond - Sat, 2013-10-19 13:07
Before we can describe all the new features of WebLogic 12.1.3 & 12.1.4 and compare them to the 12.1.2 version, we should first take a look at the 10.3.6 version. WebLogic 10.3.6 is still the latest 11g version, but Oracle will support 10.3.6 till 2018, with extended support till 2021. So Oracle’s Fusion Apps and the rest of us have enough time to migrate to WebLogic 12.1.X. Oracle also promised that the upgrade

Oracle JHeadstart 11.1.1.5 Now Available

JHeadstart - Mon, 2013-09-30 05:15

Oracle JHeadstart 11.1.1.5 is now available. This is a maintenance release for JDeveloper 11g R1. Oracle JHeadstart 12c, a new release that will be compatible with JDeveloper 12c, is currently planned to become generally available later this calendar year.

This 11.1.1.5 release contains the following new features:

  • UI Tree Checked for Pending Changes: Prior to this release, pending changes were only detected in the model, after the JSF lifecycle phase "Update Model" was executed to update the underlying ADF model bindings. This implied that for menu item commands the immediate property had to be set to false to detect the last changes made by the user that were not yet sent to the server; otherwise the "Update Model" phase was skipped. However, with immediate=false, the end user could not abandon pages with invalid or incomplete data, as validation errors occurred before the menu navigation could take place. So, the choice was to either limit the navigation freedom of the end user, or accept the fact that the pending changes alert was not always shown. With this new release this problem is solved. All menu items now have immediate=true, allowing complete freedom to navigate away, while the latest changes are still detected because JHeadstart now inspects the UI components in the page UI tree for any changes. The UI components are always updated with the latest values, even when immediate is set to true, because the UI components are updated in the JSF lifecycle phase "Apply Request Values", which is always executed regardless of the setting of the immediate property. The inspection of the UI tree is performed in a new managed bean class PendingChangesVisitCallBack. This class is generated as a managed bean in JhsCommon-beans.xml. If for some reason you want to keep the old behavior, and not inspect the UI tree for pending changes, you can add the following property to the ApplicationDefinition.xml file:
      checkUITreePendingChanges="false"
    
    Note that you need to set this property directly in the XML file; it is not visible in the JHeadstart Application Definition editor. If this property is set to false, the PendingChangesVisitCallBack bean is not generated and the menu item commands are generated again with immediate=false.
  • Ability to override generated item properties: The item-level property "Additional Properties" has been renamed to "Additional / Overriding Properties" and can now also be used to override standard generated properties. If you specify a property in "Additional / Overriding Properties" then this property always 'wins': it will override a property of the same name that was generated by the item template. This significantly reduces the need for custom item templates.
  • Support for Skyros Skin: If you want to use the new Skyros skin introduced in JDeveloper 11.1.1.7, you should change the JHS_PAGE_TEMPLATE setting at application level from default/misc/file/jhsPageTemplate.vm to default/misc/file/jhsPageTemplateSkyros.vm. This new template is optimized for the Skyros skin, and also uses the new af:panelGridLayout component.

See the JHeadstart 11.1.1.5 Release Notes for a list of bugs fixed in this release.

See the support matrix for supported JDeveloper releases.
Customers who own a JHeadstart supplement option license can download it from the Consulting Supplement Option portal. 

For a complete list of all JHeadstart features, use this link. For detailed information on all the features, check out the comprehensive JHeadstart Developer's guide. Want to try JHeadstart yourself? Do the tutorial using the JHeadstart evaluation version that you can install using Help -> Check for Updates functionality in JDeveloper.


Categories: Development

Custom Jersey WADL generation

Edwin Biemond - Thu, 2013-08-22 14:46
I had a situation where the auto-generated WADL did not match my REST services. The first difference was that the response was presented as an object instead of a collection of objects, and the second was that it could not handle JSONWithPadding as a response. Because I use this WADL in my REST client generation, I needed to fix these issues. Lucky for me, Jersey JAX-RS allows us to provide

Coherence 12.1.2 Rest application build with OEPE

Edwin Biemond - Sun, 2013-08-11 16:20
With WebLogic 12.1.2 Oracle also released a new version of Coherence and OEPE. The 12.1.2 release contains many new Coherence features like WebLogic Managed Coherence Servers and Coherence Grid Archives (GAR), which can be included in a normal EAR. Coherence also has some nice new REST features like direct & named queries, custom query engines and new security options. Plus with OEPE you can

JAX-WS SOAP over JMS

Edwin Biemond - Thu, 2013-08-01 07:39
With WebLogic 12.1.2 Oracle now also supports JAX-WS SOAP over JMS. Before 12.1.2 we had to use JAX-RPC, without any JDeveloper support: we needed to use ANT to generate all the web service code. See this blogpost for all the details. In this blogpost I will show you all the necessary JDeveloper steps to create a SOAP over JMS JAX-WS web service (bottom-up approach) and generate a web

Maven support in WebLogic & JDeveloper 12.1.2

Edwin Biemond - Fri, 2013-07-26 09:32
In the 12.1.2 release of JDeveloper and WebLogic, Oracle really improved the support for Maven as a build and provisioning tool. Oracle did this on multiple levels: a utility to synchronize all the Oracle Middleware jars to a local (.m2/repository) or a shared repository like Nexus or Artifactory; an ojmake maven plugin for just building JDeveloper projects; and an updated WebLogic plugin for

JDeveloper 12.1.2 EJB & Java Service Facade Datacontrol

Edwin Biemond - Thu, 2013-07-25 07:13
With JDeveloper 12c (12.1.2) Oracle added some great updates to the ADF EJB / Java Facade Datacontrol. Oracle already added the Java Facade and Named Criteria support in previous versions of JDeveloper, but now also added the much-wanted List of Values feature, just like we have in ADF BC. In the next part I will show you all the new features and some LOV bugs (not everything is

Custom OSB Reporting Provider

Edwin Biemond - Tue, 2013-06-11 15:53
With the OSB Report Action we can add some tracing and logging to an OSB Proxy. This works OK, especially when you add some report keys, for single Proxy projects, but when you have projects with many Proxies that invoke other JMS or local Proxies, the default reporting tables (WLI_QS_REPORT_DATA, WLI_QS_REPORT_ATTRIBUTE) in the SOA Suite soainfra schema are not so handy. I want to

Build and Deploy OSB projects with Maven

Edwin Biemond - Tue, 2013-04-30 12:31
Two years ago I did the same with ANT, and now I have migrated these scripts to Maven. These Maven poms can still do the same as my ANT scripts: build and deploy an OSB OEPE workspace; build one OSB project; export OSB projects from an OSB server and generate a customization plan. Here you can find my code https://github.com/biemond/soa_tools/tree/master/maven_osb_ps5 or the PS6 version https:

Offline Oracle Service Bus Configuration export

Edwin Biemond - Tue, 2013-04-30 12:24
With Oracle Service Bus PS6 or 11.1.1.7 we finally have a new offline build tool which does not require Eclipse (OEPE). With this OSB configjar tool (located in the OSB home /tools/configjar/ folder) you can make an OSB sbconfig export jar based on one or more OSB projects, or even more than one OSB Configuration project, plus have total control over what to include or exclude. In this blogpost I

Token Configurations in Oracle SOA Suite PS6 ( 11.1.1.7.0)

Edwin Biemond - Sat, 2013-04-20 13:11
Oracle SOA Suite PatchSet 6 or 11.1.1.7.0 now has support for token configurations, which can really help administrators in configuring or overriding external Web Service Reference parameters like the protocol (http, oramds or https), the remote host etc. And more importantly, if you do it right you can kick out all those SOA Suite deployment plans :-) Important to know: this only works on

Testing Activiti BPM on WebLogic 12c

Edwin Biemond - Fri, 2013-03-29 15:33
Activiti is a great open source workflow + BPM platform, which you can use in your own Java application (embedded) or test in the provided REST or web demo applications. Activiti also provides an Eclipse designer plugin which you can use to create your own BPMN 2.0 definitions and export them to the Activiti applications. In this blogpost I will show you the steps to get this working on the

7 things that can go wrong with Ruby 1.9 string encodings

Raimonds Simanovskis - Sun, 2013-03-10 17:00

Good news, I am back in blogging :) In recent years I have spent my time primarily on eazyBI business intelligence application development, where I use JRuby, Ruby on Rails, mondrian-olap and many other technologies and libraries, and I have gathered new experience that I wanted to share with others.

Recently I migrated eazyBI from JRuby 1.6.8 to the latest JRuby 1.7.3 version, and finally also migrated from Ruby 1.8 mode to Ruby 1.9 mode. The initial migration was not so difficult and was done in one day (thanks to unit tests, which caught the majority of differences between Ruby 1.8 and 1.9 syntax and behavior).

But then, when I thought that everything was working fine, I got quite many issues related to Ruby 1.9 string encodings which unfortunately were identified neither by the test suite nor by my initial manual tests. Therefore I want to share these issues, which might help you to avoid them in your Ruby 1.9 applications.

If you are new to Ruby 1.9 string encodings then first read, for example, the tutorials about Ruby 1.9 String and Ruby 1.9 Three Default Encodings; Ruby 1.9 Encodings: A Primer and the Solution for Rails is also useful.

1. Encoding header in source files

I will start with the easy one - if you use any Unicode characters in your Ruby source files then you need to add

# encoding: utf-8

magic comment line at the beginning of your source file. This was easy, as it was caught by unit tests :)
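
For example, a minimal sketch of a source file that needs the magic comment (file contents hypothetical):

# encoding: utf-8
# without the magic comment above, MRI 1.9 refuses to parse this file
# with "invalid multibyte char (US-ASCII)"
name = "āčē"
puts name.encoding # => UTF-8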

2. Nokogiri XML generation

The next issues were with XML generation using the Nokogiri gem when the XML contains Unicode characters. For example,

require "nokogiri"
doc = Nokogiri::XML::Builder.new do |xml|
  xml.dummy :name => "āčē"
end
puts doc.to_xml

will give the following result when using MRI 1.9:

<?xml version="1.0"?>
<dummy name="&#x101;&#x10D;&#x113;"/>

which might not be what you expect if you would like to use UTF-8 encoding for the Unicode characters in the generated XML file as well. If you execute the same Ruby code in JRuby 1.7.3 in default Ruby 1.9 mode then you get:

<?xml version="1.0"?>
<dummy name="āčē"/>

which seems OK. But actually it is not OK if you look at the encoding of the generated string:

doc.to_xml.encoding # => #<Encoding:US-ASCII>
doc.to_xml.inspect  # => "<?xml version=\"1.0\"?>\n<dummy name=\"\xC4\x81\xC4\x8D\xC4\x93\"/>\n"

In the case of JRuby you see that the doc.to_xml encoding is US-ASCII (which is a 7-bit encoding) but the actual content uses UTF-8 8-bit encoded characters. As a result you might get ArgumentError: invalid byte sequence in US-ASCII exceptions later in your code.

Therefore it is better to tell Nokogiri explicitly that you would like to use UTF-8 encoding in generated XML:

doc = Nokogiri::XML::Builder.new(:encoding => "UTF-8") do |xml|
  xml.dummy :name => "āčē"
end
doc.to_xml.encoding # => #<Encoding:UTF-8>
puts doc.to_xml

<?xml version="1.0" encoding="UTF-8"?>
<dummy name="āčē"/>

3. CSV parsing

If you do CSV file parsing in your application then the first thing you have to do is replace the FasterCSV gem (which you probably used in your Ruby 1.8 application) with the standard Ruby 1.9 CSV library.

If you process user-uploaded CSV files then a typical problem is that even if you ask for uploads in UTF-8 encoding, quite often you will get files in different encodings (as Excel is quite bad at producing UTF-8 encoded CSV files).

If you used the FasterCSV library with non-UTF-8 encoded strings then you got an ugly result, but nothing blew up:

FasterCSV.parse "\xE2"
# => [["\342"]]

If you do the same in Ruby 1.9 with the CSV library then you will get an ArgumentError exception:

CSV.parse "\xE2"
# => ArgumentError: invalid byte sequence in UTF-8

It means that now you need to rescue and handle ArgumentError exceptions in all the places where you parse user-uploaded CSV files, to be able to show user-friendly error messages.

The problem with the standard CSV library is that it does not handle ArgumentError exceptions and does not wrap them in a MalformedCSVError exception with the information on which line the error happened (as is done for other CSV format errors), which makes debugging very hard. Therefore I also "monkey patched" the CSV#shift method to add ArgumentError exception handling, along the lines of the sketch below.
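
A hypothetical sketch of such a monkey patch (the actual patch is not shown in this post, so the details here are my assumptions):

require "csv"

class CSV
  alias_method :shift_without_encoding_check, :shift

  # wrap ArgumentError ("invalid byte sequence in ...") in MalformedCSVError
  # so that the failing line number is reported, as for other CSV format errors
  def shift
    shift_without_encoding_check
  rescue ArgumentError => e
    raise MalformedCSVError, "#{e.message} on line #{lineno + 1}."
  end
end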

4. YAML serialized columns

ActiveRecord has a standard way to serialize more complex data types (like Array or Hash) in a database text column. You use the serialize method to declare serializable attributes in your ActiveRecord model class definition. By default the YAML format (using the YAML.dump method for serialization) is used to serialize a Ruby object to text that is stored in the database.

But you can get big problems if your data contains strings with Unicode characters, as the YAML implementation changed significantly between Ruby 1.8 and 1.9:

  • Ruby 1.8 used the so-called Syck library
  • JRuby in 1.8 mode used a Java based implementation that tried to act like Syck
  • Ruby 1.9 and JRuby in 1.9 mode use the new Psych library

Let's see what happens with YAML serialization of a simple Hash with a string value which contains Unicode characters.

On MRI 1.8:

YAML.dump({:name => "ace āčē"})
# => "--- \n:name: !binary |\n  YWNlIMSBxI3Ekw==\n\n"

On JRuby 1.6.8 in Ruby 1.8 mode:

YAML.dump({:name => "ace āčē"})
# => "--- \n:name: \"ace \\xC4\\x81\\xC4\\x8D\\xC4\\x93\"\n"

On MRI 1.9 or JRuby 1.7.3 in Ruby 1.9 mode:

YAML.dump({:name => "ace āčē"})
# => "---\n:name: ace āčē\n"

So as we see, all three results are different. But now let's see what happens after we have migrated our Rails application from Ruby 1.8 to Ruby 1.9. All our data in the database is serialized using the old YAML implementations, but when loaded in our application it is now deserialized using the new Ruby 1.9 YAML implementation.

When using MRI 1.9:

YAML.load("--- \n:name: !binary |\n  YWNlIMSBxI3Ekw==\n\n")
# => {:name=>"ace \xC4\x81\xC4\x8D\xC4\x93"}
YAML.load("--- \n:name: !binary |\n  YWNlIMSBxI3Ekw==\n\n")[:name].encoding
# => #<Encoding:ASCII-8BIT>

So the string that we get back from the database is no longer in UTF-8 encoding but in ASCII-8BIT encoding, and when we try to concatenate it with UTF-8 encoded strings we will get Encoding::CompatibilityError: incompatible character encodings: ASCII-8BIT and UTF-8 exceptions.

When using JRuby 1.7.3 in Ruby 1.9 mode the result will again be different:

YAML.load("--- \n:name: \"ace \\xC4\\x81\\xC4\\x8D\\xC4\\x93\"\n")
# => {:name=>"ace Ä\u0081Ä\u008DÄ\u0093"}
YAML.load("--- \n:name: \"ace \\xC4\\x81\\xC4\\x8D\\xC4\\x93\"\n")[:name].encoding
# => #<Encoding:UTF-8>

So now the result string has UTF-8 encoding but the actual content is damaged. It means that we will not even get exceptions when concatenating the result with other UTF-8 strings; we will just notice some strange garbage instead of the Unicode characters.

The problem is that there is no good way to convert your database data from the old YAML serialization to the new one. In MRI 1.9 it is at least possible to switch YAML back to the old Syck implementation, but in JRuby 1.7 in Ruby 1.9 mode it is not possible to switch to the old Syck implementation.
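
In MRI 1.9 the switch back to Syck looks like this (a minimal sketch):

require "yaml"
# switch the YAML engine back to the old Syck implementation (MRI 1.9 only)
YAML::ENGINE.yamler = "syck"
# YAML.dump now produces the old Syck-style output again (cf. the MRI 1.8 example above)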

The current workaround I came up with is a modified serialization class that I use in all model class definitions (this works in Rails 3.2, and maybe in earlier Rails 3.x versions as well):

serialize :some_column, YAMLColumn.new

The YAMLColumn implementation is a copy of the original ActiveRecord::Coders::YAMLColumn implementation. I modified the load method as follows:

def load(yaml)
  return object_class.new if object_class != Object && yaml.nil?
  return yaml unless yaml.is_a?(String) && yaml =~ /^---/
  begin
    # if yaml string contains old Syck-style encoded UTF-8 characters
    # then replace them with corresponding UTF-8 characters
    # FIXME: is there better alternative to eval?
    if yaml =~ /\\x[0-9A-F]{2}/
      yaml = yaml.gsub(/(\\x[0-9A-F]{2})+/){|m| eval "\"#{m}\""}.force_encoding("UTF-8")
    end
    obj = YAML.load(yaml)

    unless obj.is_a?(object_class) || obj.nil?
      raise SerializationTypeMismatch,
        "Attribute was supposed to be a #{object_class}, but was a #{obj.class}"
    end
    obj ||= object_class.new if object_class != Object

    obj
  rescue *RESCUE_ERRORS
    yaml
  end
end

Currently this patched version works on JRuby, where non-ASCII characters are replaced by \xNN style fragments (a byte with hex code NN). When loading existing data from the database we check if it contains any such \xNN fragment and, if so, these fragments are replaced with the corresponding UTF-8 encoded characters. If anyone has a better suggestion for an implementation without using eval then please let me know in the comments :)

If you need to create something similar for MRI then you would probably need to check if the database text contains a !binary | fragment and, if so, somehow transform it to the corresponding UTF-8 string. Does anyone have a working example of this?

5. Sending binary data with default UTF-8 encoding

I am using the spreadsheet gem to generate dynamic Excel export files. The following code was used to get the generated spreadsheet as a String:

book = Spreadsheet::Workbook.new
# ... generate spreadsheet ...
buffer = StringIO.new
book.write buffer
buffer.seek(0)
buffer.read

And then this string was sent back to the browser using the controller's send_data method.

The problem was that in Ruby 1.9 mode StringIO will by default generate strings with UTF-8 encoding. But the Excel format is a binary format, and as a result send_data failed with exceptions that the UTF-8 encoded string contained non-UTF-8 byte sequences.

The fix was to set StringIO buffer encoding to ASCII-8BIT (or you can use alias BINARY):

buffer = StringIO.new
buffer.set_encoding('ASCII-8BIT')

So you need to remember that in all places where you handle binary data you cannot use strings with the default UTF-8 encoding but need to specify the ASCII-8BIT encoding.
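
The same principle applies to plain strings, not just StringIO buffers; a minimal sketch:

# force a string buffer to binary encoding before appending raw bytes
buffer = "".force_encoding("ASCII-8BIT")
buffer << [0xD0, 0xCF, 0x11, 0xE0].pack("C*") # e.g. the first bytes of an XLS file header
buffer.encoding # => #<Encoding:ASCII-8BIT>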

6. JRuby Java file.encoding property

The last two issues are JRuby and Java specific. Java has a system property file.encoding which is not related just to file encoding but determines the default character set and string encoding in many places.

If you do not specify file.encoding explicitly then the Java VM will try to determine its default value on startup based on the host operating system "locale". On Linux it might be set to UTF-8, on Mac OS X by default it will be MacRoman, and on Windows it will depend on the Windows default locale setting (which will not be UTF-8). Therefore it is always better to set the file.encoding property explicitly for Java applications (e.g. using the -Dfile.encoding=UTF-8 command line flag).

file.encoding determines which default character set the java.nio.charset.Charset.defaultCharset() method call will return. And even if you change the file.encoding property at runtime it will not change the java.nio.charset.Charset.defaultCharset() result, which is cached during startup.

JRuby uses java.nio.charset.Charset.defaultCharset() in very many places to get the default system encoding, and uses it in many places when constructing Ruby strings. If java.nio.charset.Charset.defaultCharset() does not return the UTF-8 character set then it might result in problems when using Ruby strings with UTF-8 encoding. Therefore in the JRuby startup scripts (jruby, jirb and others) the file.encoding property is always set to UTF-8.

So if you start your JRuby application in the standard way, using the jruby script, then you should have file.encoding set to UTF-8. You can check it in your application using ENV_JAVA['file.encoding'].
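
For example, a quick check from a JRuby console (a minimal sketch):

require "java"

ENV_JAVA["file.encoding"]                     # => "UTF-8" when started via the jruby script
java.nio.charset.Charset.default_charset.to_s # => "UTF-8"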

But if you start your JRuby application in a non-standard way (e.g. you have a JRuby based plugin for some other Java application) then you might not have file.encoding set to UTF-8, and then you need to worry about it :)

7. JRuby Java string to Ruby string conversion

I got a file.encoding related issue in the eazyBI reports and charts plugin for JIRA. In this case the eazyBI plugin is an OSGi based plugin for the JIRA issue tracking system, and JRuby is running as a scripting container inside the OSGi bundle.

The JIRA startup scripts do not specify a file.encoding default value, and as a result it is typically set to the operating system default value. For example, on my Windows test environment it is set to the Windows-1252 character set.

If you call methods of Java objects from JRuby then it will automatically convert java.lang.String objects to Ruby String objects, but the Ruby strings in this case will use an encoding based on java.nio.charset.Charset.defaultCharset(). So even though a Java string (which internally uses the UTF-16 character set) can contain any Unicode character, it will be returned to Ruby not as a string with UTF-8 encoding but, in my case, with Windows-1252 encoding. As a result all Unicode characters which are not in the Windows-1252 character set will be lost.

And this is very bad, because everywhere else JIRA does not use java.nio.charset.Charset.defaultCharset() and can handle and store all Unicode characters even when file.encoding is not set to UTF-8.

Therefore I finally managed to create a workaround which forces all Java strings to be converted to Ruby strings using UTF-8 encoding.

I created a custom Java string converter based on the standard one in the org.jruby.javasupport.JavaUtil class:

package com.eazybi.jira.plugins;

import org.jruby.javasupport.JavaUtil;
import org.jruby.Ruby;
import org.jruby.RubyString;
import org.jruby.runtime.builtin.IRubyObject;

public class RailsPluginJavaUtil {
    public static final JavaUtil.JavaConverter JAVA_STRING_CONVERTER = new JavaUtil.JavaConverter(String.class) {
        public IRubyObject convert(Ruby runtime, Object object) {
            if (object == null) return runtime.getNil();
            // PATCH: always convert Java string to Ruby string with UTF-8 encoding
            // return RubyString.newString(runtime, (String)object);
            return RubyString.newUnicodeString(runtime, (String)object);
        }
        public IRubyObject get(Ruby runtime, Object array, int i) {
            return convert(runtime, ((String[]) array)[i]);
        }
        public void set(Ruby runtime, Object array, int i, IRubyObject value) {
            ((String[])array)[i] = (String)value.toJava(String.class);
        }
    };
}

Then in my plugin initialization Ruby code I dynamically replaced the standard Java string converter with my customized converter:

java_converters_field = org.jruby.javasupport.JavaUtil.java_class.declared_field("JAVA_CONVERTERS")
java_converters_field.accessible = true
java_converters = java_converters_field.static_value.to_java
java_converters.put(java.lang.String.java_class, com.eazybi.jira.plugins.RailsPluginJavaUtil::JAVA_STRING_CONVERTER)

And as a result all Java strings returned by Java methods are now converted to Ruby strings using UTF-8 encoding, and not the encoding from the file.encoding Java property.

Final thoughts

My main conclusions from solving all these string encoding issues are the following:

  • Use UTF-8 encoding as much as possible. Handling conversions between different encodings will be much harder than you expect.
  • Use example strings with Unicode characters in your tests. I didn't identify all these issues initially when running tests after migration because not all tests were using example strings with Unicode characters. So next time, instead of using a "dummy" string in your tests, use "dummy āčē" everywhere :) (see the sketch below)
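
A minimal sketch of such a test (Test::Unit style, names hypothetical):

require "test/unit"

class EncodingSmokeTest < Test::Unit::TestCase
  def test_concatenation_with_unicode
    # would raise Encoding::CompatibilityError if the string had a binary encoding
    greeting = "dummy āčē"
    assert_equal "dummy āčē!", greeting + "!"
  end
end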

And please let me know (in the comments) if you have better or alternative solutions for the issues that I described here.

Categories: Development

Add Code w/ Syntax Highlighting to PowerPoint

Tyler Muth - Wed, 2013-02-06 09:12
…or any media that supports rich text or HTML. I often include code-snippets or queries in technical presentations. I find them much more visually appealing and easier to read if they include syntax highlighting. If you’re using Notepad++ on Windows, it’s trivial. Simply go to the “Plugins” menu > “NppExport” > “Copy all formats to […]
Categories: DBA Blogs, Development

“Refresh” a CSV File

Tyler Muth - Tue, 2013-02-05 09:22
In a previous post I discussed a PDF reader that allows you to refresh the PDF after changes are made. This is the same concept except it’s for CSV files. I’ve been using Perl to convert sqlplus spool output to CSV which is much easier to read into R. As I’m updating my code and […]
Categories: DBA Blogs, Development
