In other words, you use def get_payment_types if get_payment_types is
to be performed on a specific instance of that class, and
def self.get_payment_types if it is a generic, class-level function
for that class...
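A minimal sketch of that distinction (the method bodies here are just
illustrative):

  class PaymentType
    # Instance method: called on a particular object,
    # e.g. PaymentType.new.description
    def description
      "a payment type"
    end

    # Class method: called on the class itself,
    # e.g. PaymentType.get_payment_types
    def self.get_payment_types
      ["Check", "Credit Card", "Purchase Order"]
    end
  end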
I guess what I am confused about is this:
Dave Thomas talks about self as a special variable used by Ruby to
maintain a reference to the context in which the interpreter is
operating at any given point, so it knows where to find a particular
method. So why does Ruby not seem to know the context of
"get_payment_types" without the self? Here is a slight variation on the
code:
puts "Master Self is set to: " + self.to_s + " 1"
payment_types = ["Check", "Credit Card", "Purchase Order"]
# must be defined after the method. Can't be defined in a method
PAYMENT_TYPES = get_payment_types
puts "Master Self is set to: " + self.to_s + " 2"
puts "Master Self is set to: " + self.to_s + " 3"
p(PaymentType::PAYMENT_TYPES << self.to_s)
puts "Master Self is set to: " + self.to_s + " 4"
This is an attempt to show the context contained in "self" at various
points in the code. You will see it change as the interpreter executes
the code sequentially. As it executes in the class body, Ruby
understands the context. So, it knows it is in the class, so why can't
I call a method without the self reference? When a call comes in from a
browser, Rails starts executing inside a controller class, calling its
methods, which in turn call on each other within the same class, all of
which typically won't have the self reference. I am trying to get at
some fundamental issue of object-oriented programming that I don't
understand.
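For what it's worth, here is the kind of controller-style call chain I
mean (the class and method names are made up):

  class OrdersController
    def show
      # no explicit receiver: Ruby sends load_order to self,
      # the controller instance handling the request
      load_order
    end

    def load_order
      puts "self inside an instance method: " + self.to_s
    end
  end

  OrdersController.new.show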