Ref:

[1] - Ebean and the OptimisticLockException
[2] - http://www.avaje.org/occ.html

The problem

Recently I ran into a weird problem while using Ebean to update an item. The update looked like this:

SomeEntity entity = SomeEntity.find.byId(siteId);
if (entity == null) {
    // throw exception or send signal
}
entity.setTextAttr("A big text here");
entity.save();

When I ran this code, the save() call threw an exception:

javax.persistence.OptimisticLockException: Data has changed

The update code is pretty simple, and so is the definition of SomeEntity.

create table SomeEntity (
  id                        varchar(255) not null,
  textAttr                  TEXT)
;

OptimisticLock

(If you just want a quick solution, skip ahead to the Solution section.)

According to avaje’s documentation, optimistic locking is a mechanism to avoid lost updates when concurrent updates hit the same data at the same time.

The mechanism is very simple: check whether the old data has been changed. If the attributes read before the update have been modified by another thread in the meantime, the update is aborted by throwing an OptimisticLockException.

There are two ways for Ebean to decide whether the old data has been changed:

  • Check all the attributes (the default).
  • Check one special, user-defined attribute known as the version attribute.

The first one is where the problem came from. When I called save(), Ebean actually mapped it to the following SQL:

UPDATE some_entity SET text_attr = 'new value' WHERE id = 'item id' AND text_attr = 'old value';

This SQL looks innocent, and it works well most of the time. However, it can sometimes fail even when no one else has changed the data. That is because the database may store a value slightly differently from the value you read back (to optimise storage, I guess). This can happen with double or text columns. The WHERE clause above then matches nothing, and the update fails with the exception.

As an example mentioned in this post:

It can happen that you retrieve in your Java code something like 0.46712538003907 but in fact, in the database, the data stored is something like 0.467125380039068. When updating, the Java value will be included in the WHERE clause and, as it doesn’t match the value stored in the database, the update will fail.
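The same kind of representation drift shows up without any database involved. Here is a tiny, self-contained Java sketch (the class name and the values are only illustrative):

public class DoubleDriftDemo {
    public static void main(String[] args) {
        // 0.1 and 0.2 cannot be represented exactly in binary floating point,
        // so the stored sum is not the value you might expect to compare against.
        double sum = 0.1 + 0.2;
        System.out.println(sum);        // prints 0.30000000000000004, not 0.3
        System.out.println(sum == 0.3); // prints false
    }
}

A double column can drift in a similar way between what your Java code holds and what the database actually stores, which is exactly what breaks the generated WHERE clause.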

Solution: Version Attribute

By adding a version attribute to the model, you tell Ebean to use only this column to decide whether the old value has been changed.

import java.sql.Timestamp;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

import play.data.validation.Constraints;
import play.db.ebean.Model;

@Entity
public class SomeEntity extends Model {
    @Id
    public String Id;

    @Column(columnDefinition = "MEDIUMTEXT")
    @Constraints.Required
    public String TextAttr;

    @Version
    @Column(columnDefinition = "timestamp default '2014-10-06 21:17:06'")
    public Timestamp lastUpdate; // here: the version attribute
}

You don’t need to change anything in the update code. Now when you perform an update, the SQL statement will be:

UPDATE some_entity SET text_attr = 'new value', last_update = <new timestamp> WHERE id = 'item id' AND last_update = '2014-10-06 21:17:06';

And the problem should be solved.


Note that optimistic locking only comes into play at the READ_COMMITTED isolation level. So if you somehow wrap the update in a transaction, you might be able to work around this problem.
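If you want to try the transaction route, a minimal sketch using the classic com.avaje.ebean API could look like the following (the wrapper class and method are just for illustration; treat this as an experiment rather than a guaranteed fix):

import com.avaje.ebean.Ebean;

public class UpdateWithTransaction {
    public static void update(String siteId) {
        Ebean.beginTransaction();
        try {
            SomeEntity entity = SomeEntity.find.byId(siteId);
            if (entity != null) {
                entity.setTextAttr("A big text here");
                entity.save();
            }
            Ebean.commitTransaction();
        } finally {
            // endTransaction() rolls the transaction back if commit was never reached
            Ebean.endTransaction();
        }
    }
}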

A very simple file logger:

import logging

logger = logging.getLogger(__name__) # this will show current module in the log line
logger.setLevel(logging.INFO)

# create a file handler

handler = logging.FileHandler('hello.log')
handler.setLevel(logging.INFO)

# create a logging format

formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)

# add the handlers to the logger

logger.addHandler(handler)

logger.info('Happy logging!')

Logging in a catch

try:
    open('/path/to/does/not/exist', 'rb')
except (SystemExit, KeyboardInterrupt):
    raise
except Exception:
    logger.error('Failed to open file', exc_info=True)

Ref:

[1] - Good logging practice in Python

Ref:

[1] - Loading a conf file specified when running play test (in Japanese)

[2] - github - typesafehub/config


Using config files definitely makes your life easier when deploying, debugging, and testing your applications. Sometimes people want different config files for run dev and run prod; sometimes people just want a different config for tests.

So I’m going to show how to use multiple config files in an sbt project. In this post, I create a minimal project using Typesafe’s activator, and I strongly recommend this tool for creating small, clean projects with minimal dependencies when practising.

The normal way to use config in an sbt project

First, create a simple sbt Scala project. I created mine using activator:

./activator new

Then follow the command-line wizard to choose minimal-scala as the project type and fill in the project name.

To load config files, you will need one extra dependency: Typesafe’s config.

Add the following to the build.sbt file under the root directory of the project.

resolvers += "Typesafe Repo" at "http://repo.typesafe.com/typesafe/releases/"

libraryDependencies += "com.typesafe" % "config" % "1.2.1"

Now your build.sbt file should look like this:

name := """project-name"""

version := "1.0"

scalaVersion := "2.11.1"

resolvers += "Typesafe Repo" at "http://repo.typesafe.com/typesafe/releases/"

// Change this to another test framework if you prefer
libraryDependencies += "org.scalatest" %% "scalatest" % "2.1.6" % "test"

libraryDependencies += "com.typesafe" % "config" % "1.2.1"

OK, now you can use the ConfigFactory class in your project to load your config files.

It’s time to add your own config file.

Add application.conf under src/main/resources and write some random settings into it. Here is mine:

test.text1="1"
test.text2="2"
test.text3="3"

To check whether this config file is loaded properly, write a simple program:

import com.typesafe.config.ConfigFactory

object Hello {
  def main(args: Array[String]): Unit = {
    println("Hello, config!")
    val configText1 = ConfigFactory.load().getConfig("test").getString("text1")
    val configText2 = ConfigFactory.load().getConfig("test").getString("text2")
    val configText3 = ConfigFactory.load().getConfig("test").getString("text3")
    println(configText1)
    println(configText2)
    println(configText3)
  }
}

When you run this program, it will print:

Hello, config!
1
2
3

Use config in your test

In most cases, people use a different set of configs in their tests. It would be great to replace just some of the settings in src/main/resources/application.conf and keep the others.

Actually, when you run sbt test, sbt first picks up src/main/resources/application.conf and then uses src/test/resources/application.conf to override settings from the first one.

Note: sbt does this override even if you do not put include "application.conf" in src/test/resources/application.conf.

To check that, add a new test case under src/test/scala in your project:

import com.typesafe.config.ConfigFactory
import org.scalatest._

class HelloSpec extends FlatSpec with Matchers {
  "Hello" should "have tests" in {
    val configText1 = ConfigFactory.load().getConfig("test").getString("text1")
    val configText2 = ConfigFactory.load().getConfig("test").getString("text2")
    val configText3 = ConfigFactory.load().getConfig("test").getString("text3")
    println(configText1)
    println(configText2)
    println(configText3)

    true should be === true
  }
}

Then add a config file for the test as src/test/resources/application.conf:

test.text2="22"

When you run sbt test, you will get this as expected:

1
22
3

Use a specific config file

Normally, you can specify the config file with the JVM parameters -Dconfig.file or -Dconfig.resource (for detailed information, please refer to the config library’s documentation [2]).

But when working in a team, you probably want this to be static, so that the specified file is used whenever, and by whomever, sbt test is run. In that case, you need to put some extra settings in your build.sbt:

fork in Test := true // fork a separate JVM for tests so the javaOptions below take effect

javaOptions in Test += "-Dconfig.resource=test.conf" // tell the config library which resource to load

Then put your test.conf at src/test/resources/test.conf:

include "application.conf"

test.text3="333"

Note that this time you do have to use include "application.conf".

This time when you run sbt test, you will get:

1
22
333

That is because the order in which the config files override each other is:

src/test/resources/test.conf
src/test/resources/application.conf
src/main/resources/application.conf

Note that, whether you like it or not, once you include "application.conf", sbt will first try to find src/test/resources/application.conf. If that does not exist, it falls back to src/main/resources/application.conf.

Even if you delete src/test/resources/application.conf, src/main/resources/application.conf is still on the classpath.

Here is the result if there is no src/test/resources/application.conf in the project:

1
2
333
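If you ever lose track of which file a particular value ended up coming from, the config library can report the origin of every value. Typesafe’s config is a plain Java library, so here is a small Java sketch (the class name is just for illustration):

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class WhichConfig {
    public static void main(String[] args) {
        Config config = ConfigFactory.load();
        // origin().description() reports the resource (and line) the value was resolved from
        System.out.println(config.getValue("test.text3").origin().description());
    }
}

Calling the same two lines from inside a test will tell you whether test.text3 came from test.conf or from one of the application.conf files.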

Ref:

[1] - Defining 1 logback file per level

[2] - logback - manual

[3] - Akka - Log into files


In my last memo about how to use logback in Akka to log to the file system, I showed how to configure logback.xml.

However, recently this has been killing me when I use the log file for bug hunting, because all the log lines, including debug, info, warning, and error, end up in one file, while what I am really looking for are the errors, which are rare in the file.

So it seems reasonable to place logs of different levels into different files. In my case, which should also cover most cases, I will put all the info log lines into info.log and everything at error level and above into error.log.

Since we are placing logs into different files, we are going to use multiple appenders.

Put the info log into one file

  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>log/info.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Daily rollover with compression -->
            <fileNamePattern>process-log-%d{yyyy-MM-dd}.gz</fileNamePattern>
            <!-- keep 90 days' worth of history -->
            <maxHistory>90</maxHistory>
        </rollingPolicy>
        <append>true</append>
        <!-- encoders are assigned the type
             ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
        <encoder>
            <pattern>%d,%msg%n</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>INFO</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

The key point is the filter element. Here LevelFilter accepts only log events at exactly the INFO level and denies everything else.

Put error and all levels above into another file

  <appender name="ERROR_FILE" class = "ch.qos.logback.core.rolling.RollingFileAppender">
        <file>
            log/error.log
        </file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- daily rollover with compression -->
            <fileNamePattern>error-log-%d{yyyy-MM-dd}.gz</fileNamePattern>
            <!-- keep 100 days' worth of history -->
            <maxHistory>100</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%date{yyyy-MM-dd HH:mm:ss ZZZZ} %message%n</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
    </appender>

In this example, we use ThresholdFilter to send everything at ERROR level and above to the log file.

Then, put the two appenders together

This is the tricky part. We are going to use two root loggers in the same logback.xml.

<configuration>
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>log/info.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Daily rollover with compression -->
            <fileNamePattern>process-log-%d{yyyy-MM-dd}.gz</fileNamePattern>
            <!-- keep 90 days' worth of history -->
            <maxHistory>90</maxHistory>
        </rollingPolicy>
        <append>true</append>
        <!-- encoders are assigned the type
             ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
        <encoder>
            <pattern>%d,%msg%n</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>INFO</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>
    <appender name="ERROR_FILE" class = "ch.qos.logback.core.rolling.RollingFileAppender">
        <file>
            log/error.log
        </file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- daily rollover with compression -->
            <fileNamePattern>error-log-%d{yyyy-MM-dd}.gz</fileNamePattern>
            <!-- keep 100 days' worth of history -->
            <maxHistory>100</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%date{yyyy-MM-dd HH:mm:ss ZZZZ} %message%n</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
    </appender>
    <root level="ERROR">
        <appender-ref ref="ERROR_FILE" />
    </root>
    <root level="INFO">
        <appender-ref ref="FILE" />
    </root>
</configuration>

Finally

Now, in your SomeActor.scala:

class SomeActor extends Actor with ActorLogging {

  implicit val ec = context.dispatcher

  def receive = {
    case x: SomeMsg =>
      log.info("will be logged into info.log")
      log.error("will be logged into error.log")
  }
}

Hope this will help you.

Ref:

[1] - Stackoverflow - What’s the canonical way to check for type in python?

Exact type

if type(o) is str:
  print "this is a string"

Instance of the type or of a subclass

if isinstance(o, str):
  print "this is a string or a instance of subclass of string"

There are more patterns in the reference for dealing with more complex use cases.