Symbol#to_proc is slow... is it slow enough to matter?

November 18, 2008 · 3 min read

It’s common knowledge that the Symbol#to_proc trick is slower than writing out a block by hand. But just how much slower? I put together some benchmarks to find out.
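For anyone who hasn't met it: the trick turns a symbol into a block that calls the method of that name on each element. These two lines do the same thing (the sample data here is mine, not from the benchmarks below):

```ruby
words = %w[foo bar baz]

# Symbol#to_proc shorthand:
words.map(&:upcase)         #=> ["FOO", "BAR", "BAZ"]

# The equivalent hand-written block:
words.map { |w| w.upcase }  #=> ["FOO", "BAR", "BAZ"]
```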

Environment

These tests were run on Ruby 1.8.6-p111 and Rails 2.1.

Benchmarking

Say you have a database of 1,000 items that you need to iterate over. Let’s set aside the fact that displaying 1,000 items probably means you have usability problems, and just roll with it.

1_000.times { |n| Bar.create :name => "bar-#{n}" }
bars = Bar.find(:all)

Here’s how the two approaches compare over 1,000 ActiveRecord instances:

Benchmark.measure { bars.map(&:name) }.real
#=> 0.00645709037780762

Benchmark.measure { bars.map { |b| b.name } }.real
#=> 0.00141692161560059

That’s a horrific-sounding increase: to_proc takes more than 350% longer than the plain block. But let’s be realistic – over 1,000 records, the total time is 0.0065 seconds. Not exactly something to lose sleep over.
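To put that in per-record terms, a quick bit of arithmetic on the numbers above:

```ruby
to_proc_time = 0.00645709037780762
block_time   = 0.00141692161560059

# Extra cost per record when using &:name instead of a literal block
extra_per_record = (to_proc_time - block_time) / 1_000
#=> roughly 5 microseconds per record
```

Five microseconds of overhead per record is a hard thing to get worked up about.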

What about 1,000,000 rows? We already have 1,000, so let’s top it up:

(1_000_000 - 1_000).times { Bar.create :name => Time.now.to_f.to_s }
bars = Bar.find(:all)

That gives us a million rows. By this point your database is probably questioning your life choices. Presenting a million rows to a user is a bit of an edge case, but here’s how long it takes:

Benchmark.measure { bars.map(&:name) }.real
#=> 6.25304508209229

Benchmark.measure { bars.map { |b| b.name } }.real
#=> 1.38965106010437

Almost 5 extra seconds over a million rows. Five seconds is a real hit, sure – but how long will your application be running before you hit a million rows in a single table and need to iterate over every last one of them?

Don't optimise prematurely. By the time to_proc becomes your bottleneck, you'll have hit plenty of other problems first – like the time it takes just to load those million records in the first place:

Benchmark.measure { Bar.find(:all) }.real
#=> 406.738657951355

Worry about those first.

Run it yourself

It’s been a long time since I ran the original benchmark. Here’s some copy-paste code to run a similar one yourself:

require 'benchmark'
puts "PLATFORM = #{RUBY_PLATFORM}, VERSION = #{RUBY_VERSION}"
Benchmark.bmbm do |x|
  x.report("to_proc")   { 10_000_000.times(&:to_s) }
  x.report("literal 1") { 10_000_000.times { |n| n.to_s } }
  x.report("literal 2") { n = lambda { |i| i.to_s }; 10_000_000.times(&n) }
end

Here are the results from my MacBook Air on Ruby 2.1.2 – and they tell a rather interesting story:

    Rehearsal ---------------------------------------------
    to_proc     1.890000   0.010000   1.900000 (  1.909775)
    literal 1   2.340000   0.000000   2.340000 (  2.350912)
    literal 2   2.270000   0.000000   2.270000 (  2.274322)
    ------------------------------------ total: 6.510000sec

    user     system      total        real
    to_proc     1.810000   0.000000   1.810000 (  1.808921)
    literal 1   2.090000   0.000000   2.090000 (  2.092189)
    literal 2   2.060000   0.010000   2.070000 (  2.061436)
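Why the flip? On Ruby 1.8, Symbol#to_proc wasn't part of the language at all – ActiveSupport monkey-patched it in as pure Ruby, allocating a fresh Proc and dispatching via send. Roughly like this (a sketch from memory, written as a standalone helper rather than a monkey-patch so it doesn't shadow the real method):

```ruby
# Approximately what ActiveSupport's 1.8-era Symbol#to_proc did:
# build a Proc that sends the symbol as a message to its first argument.
def symbol_to_proc(sym)
  Proc.new { |*args| args.shift.__send__(sym, *args) }
end

[1, 2, 3].map(&symbol_to_proc(:to_s))  #=> ["1", "2", "3"]
```

Ruby 1.9 pulled Symbol#to_proc into the core language as a native C implementation, so by 2.1.2 that per-call overhead is gone and to_proc actually edges out the literal block.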

These posts are LLM-aided. Backbone, original writing, and structure by Craig. Research and editing by Craig + LLM. Proof-reading by Craig.