I sense a series on ActiveRecord optimizations upon us. Today, let's look at how to benchmark inserting ActiveRecord objects into our database.

 
require "benchmark"

Benchmark.bm(7) do |x|
  # 1) Insert in slices of 100
  x.report { (1..10_000).each_slice(100) { |a| a.each_with_index { |_p, idx| Product.create!(title: "Shoe-#{idx}", store_id: 1) } } }
  # 2) Insert with each_with_index
  x.report { (1..10_000).each_with_index { |_p, idx| Product.create!(title: "Shoe-#{idx}", store_id: 1) } }
  # 3) Insert with a plain each
  x.report { (1..10_000).each { |i| Product.create!(title: "Shoe-#{i}", store_id: 1) } }
end
 
#=> [
#  #<Benchmark::Tms:0x007fa71863ed60 @label="", @real=24.679326, @cstime=0.0, @cutime=0.0, @stime=4.780000000000001, @utime=16.759999999999998, @total=21.54>,
#  #<Benchmark::Tms:0x007fa719869418 @label="", @real=24.60898, @cstime=0.0, @cutime=0.0, @stime=4.76, @utime=16.750000000000007, @total=21.510000000000005>,
#  #<Benchmark::Tms:0x007fa7162f8658 @label="", @real=24.802445, @cstime=0.0, @cutime=0.0, @stime=4.75, @utime=16.92, @total=21.67>
# ]
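A quick note on reading these Benchmark::Tms objects (field names come from Ruby's standard Benchmark library): real is wall-clock time, utime and stime are user and system CPU time, and total is the sum of the CPU times. A minimal standalone example:

```ruby
require "benchmark"

# Benchmark.measure returns a Benchmark::Tms: real is wall-clock time,
# utime/stime are user/system CPU time, and total sums the CPU times
# (including any child-process times, cutime/cstime).
tms = Benchmark.measure { 100_000.times { |i| i * i } }

puts format("real=%.4f utime=%.4f stime=%.4f total=%.4f",
            tms.real, tms.utime, tms.stime, tms.total)
```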

Our benchmarks show the looping itself isn't the issue (changing how we iterate has no noticeable effect); rather, calling ActiveRecord#create! 10,000 times is what slows the system down considerably.
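We can sanity-check that claim without touching the database by running the same three iteration patterns with a trivial body in place of Product.create!. This is a sketch using only the standard library; exact timings will vary by machine, but all three runs finish in a few milliseconds, so the looping overhead is negligible next to the ~24 seconds above:

```ruby
require "benchmark"

# Same three iteration patterns as the database benchmark, but with a
# no-op body; any time measured here is pure looping overhead.
results = Benchmark.bm(7) do |x|
  x.report { (1..10_000).each_slice(100) { |a| a.each_with_index { |_p, idx| idx.to_s } } }
  x.report { (1..10_000).each_with_index { |_p, idx| idx.to_s } }
  x.report { (1..10_000).each { |i| i.to_s } }
end
```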

This is the first step in debugging our application for breakdowns in the code that updates our database through an ORM.

Let’s add two more tests for good measure:

x.report { 10_000.times { |n| Product.create!(title: "Shoe-#{n}", store_id: 1) } }
x.report { Product.create!(title: "Shoe-1", store_id: 1) }

#<Benchmark::Tms:0x007fa719b81920 @label="", @real=24.808788, @cstime=0.0, @cutime=0.0, @stime=4.789999999999999, @utime=16.849999999999994, @total=21.639999999999993>
#<Benchmark::Tms:0x007fa719bb0a40 @label="", @real=0.349937, @cstime=0.0, @cutime=0.0, @stime=0.0800000000000054, @utime=0.010000000000019327, @total=0.09000000000002473>

With a real time of roughly 0.35 seconds even for a single insert, we can see that ActiveRecord, not the looping, is the culprit.

:)
