EVALUATIONS OF DELAYED REINFORCEMENT IN CHILDREN WITH DEVELOPMENTAL DISABILITIES

By

JOLENE RACHEL SY

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL ...
There has been some debate about the mechanisms responsible for response acquisition and maintenance under delayed reinforcement (see Lattal, 2010, for a review). For example, it has been suggested that responding can be maintained under delayed reinforcement because “superstitious” responses bridge the delay to the reinforcer (Keller & Schoenfeld, 1950). Indeed, Ferster (1953) found that each subject consistently engaged in a particular response (e.g., turning in a circle with the head stretched high) during a 60-s delay interval. These responses then continued to occur throughout an approximately 100-hr experimental period. In the present investigation, all subjects engaged in alternative responses during the delay interval.
These responses varied over time and across subjects, ranging from stereotypy (Walden) to drinking water (Vlade and Amira). This lack of consistency suggests that the efficacy of delayed reinforcement was not mediated by “superstitious” responses occurring during the delay. Instead, these responses may have been adjunctive responses induced by the reinforcement schedule (e.g., water consumption; Falk, 1961) or responses that were briefly adventitiously reinforced. Reeve et al. (1993) noted that responses adventitiously reinforced by temporally contiguous reinforcement may vary and fade over time as the overall contingency (reinforcement for the target response) begins to control behavior.
Assuming that the procedures used in the current investigation accurately estimate response rates under delayed reinforcement, the results may have implications for skill acquisition and for the maintenance of problem behavior. Teachers may not need to deliver reinforcers immediately to maintain an appropriate response, even if problem behavior occurs after those appropriate responses. On the other hand, reinforcers delivered for appropriate responses that occur some time after problem behavior may inadvertently maintain the problem behavior. It is difficult, however, to extrapolate from the systematic, controlled arrangements of the current experiments to the more complex contingencies operating in the natural environment. It is hoped that the current series of studies will set the stage for further research on delayed reinforcement in complex environments.
Bilodeau, E. A., & Bilodeau, I. M. (1958). Variation of temporal intervals among critical events in five studies of knowledge of results. Journal of Experimental Psychology, 55, 603-612.
Brackbill, Y., & Kappy, M. S. (1962). Delay of reinforcement and retention. Journal of Comparative and Physiological Psychology, 55, 14-18.
Catania, A. C. (1971). Reinforcement schedules: The role of responses preceding the one that produces the reinforcer. Journal of the Experimental Analysis of Behavior, 15, 271-287.
Catania, A. C. (2007). Learning (4th ed.). New York: Sloan Publishing.
Critchfield, T. S., & Lattal, K. A. (1993). Acquisition of a spatially defined operant with delayed reinforcement. Journal of the Experimental Analysis of Behavior, 59, 373-387.
Denny, M. A., Allard, M., Hall, E., & Rokeach, M. (1960). Supplementary report: Delay of knowledge of results, knowledge of task, and intertrial interval. Journal of Experimental Psychology, 60, 327.
Dews, P. B. (1960). Free-operant behavior under conditions of delayed reinforcement: I. CRF-type schedules. Journal of the Experimental Analysis of Behavior, 3, 221-234.
Dickinson, A., Watt, A., & Griffiths, W. J. H. (1992). Free-operant acquisition with delayed reinforcement. The Quarterly Journal of Experimental Psychology, 45, 241-258.
Dixon, M. R., Horner, M. J., & Guercio, J. (2003). Self-control and the preference for delayed reinforcement: An example in brain injury. Journal of Applied Behavior Analysis, 36, 371-374.
Erickson, M. T., & Lipsitt, L. P. (1960). Effects of delayed reward on simultaneous and successive discrimination learning in children. Journal of Comparative and Physiological Psychology, 53, 256-260.
Escobar, R., & Bruner, C. A. (2007). Response induction during the acquisition and maintenance of lever pressing with delayed reinforcement. Journal of the Experimental Analysis of Behavior, 88, 29-49.
Falk, J. L. (1961). Production of polydipsia in normal rats by an intermittent food schedule. Science, 133, 195-196.
Ferster, C. B. (1953). Sustained behavior under delayed reinforcement. Journal of Experimental Psychology, 45, 27-45.
Ferster, C. B., & Skinner, B. F. (1957). Schedules of reinforcement. New York: Appleton-Century-Crofts.
Fleshler, M., & Hoffman, H. S. (1962). A progression for generating variable-interval schedules. Journal of the Experimental Analysis of Behavior, 5, 529-530.
Fowler, H., & Trapold, M. A. (1962). Escape performance as a function of delay of reinforcement. Journal of Experimental Psychology, 63, 464-467.
Gleeson, S., & Lattal, K. A. (1987). Response-reinforcer relations and the maintenance of behavior. Journal of the Experimental Analysis of Behavior, 48, 383-393.
Grindle, C. F., & Remington, B. (2002). Discrete-trial training for autistic children when reward is delayed: A comparison of conditioned cue value and response marking. Journal of Applied Behavior Analysis, 35, 187-190.
Herrnstein, R. J. (1970). On the law of effect. Journal of the Experimental Analysis of Behavior, 13, 243-266.
Hockman, C. H., & Lipsitt, L. P. (1961). Delay-of-reward gradients in discrimination learning with children for two levels of difficulty. Journal of Comparative and Physiological Psychology, 54, 24-27.
Keely, J., Feola, T., & Lattal, K. A. (2007). Contingency tracking during unsignaled delayed reinforcement. Journal of the Experimental Analysis of Behavior, 88, 229.
Keller, F. S., & Schoenfeld, W. N. (1950). Principles of psychology. New York: Appleton-Century-Crofts.
Lattal, K. A. (2010). Delayed reinforcement in operant behavior. Journal of the Experimental Analysis of Behavior, 93, 129-139.
Lattal, K. A., & Gleeson, S. (1990). Response acquisition with delayed reinforcement. Journal of Experimental Psychology: Animal Behavior Processes, 16, 27-39.
LeSage, M. G., Byrne, T., & Poling, A. (1996). Effects of d-amphetamine on response acquisition with immediate and delayed reinforcement. Journal of the Experimental Analysis of Behavior, 66, 349-367.
Mace, F. C., & Critchfield, T. S. (2010). Translational research in behavior analysis: Historical traditions and imperative for the future. Journal of the Experimental Analysis of Behavior, 93, 293-312.
Mazur, J. E. (1987). An adjusting procedure for studying delays of reinforcement. In M. L. Commons, J. E. Mazur, J. A. Nevin, & H. Rachlin (Eds.), Quantitative analyses of behavior, Vol. 5: The effects of delays and of intervening events on reinforcement value (pp. 55-73). Hillsdale, NJ: Erlbaum.
Meichenbaum, D. H., Bowers, K. S., & Ross, R. R. (1968). Modification of classroom behavior of institutionalized female adolescent offenders. Behaviour Research and Therapy, 6, 343-353.
Millar, W. S. (1990). Span of integration for delayed-reward contingency learning in 6- to 8-month-old infants. Annals of the New York Academy of Sciences, 608, 239-266.
Millar, W. S., & Watson, J. S. (1979). The effect of delayed feedback on infant learning reexamined. Child Development, 50, 747-751.
Miltenberger, R. (2008). Behavior modification (4th ed.). Belmont, CA: Thomson Wadsworth.
Morse, W. H., & Skinner, B. F. (1957). A second type of superstition in the pigeon. The American Journal of Psychology, 70, 308-311.
Okouchi, H. (2009). Response acquisition by humans with delayed reinforcement. Journal of the Experimental Analysis of Behavior, 91, 377-390.
Reeve, L., Reeve, K. F., Brown, A. K., Brown, J. L., & Poulson, C. L. (1992). Effects of delayed reinforcement on infant vocalization rate. Journal of the Experimental Analysis of Behavior, 58, 1-8.
Reeve, L., Reeve, K. F., & Poulson, C. L. (1993). A parametric variation of delayed reinforcement in infants. Journal of the Experimental Analysis of Behavior, 60, 515-527.
Reilly, M. P., & Lattal, K. A. (2004). Within-session delay-of-reinforcement gradients. Journal of the Experimental Analysis of Behavior, 82, 21-35.
Renner, K. E. (1964). Delay of reinforcement: A historical review. Psychological Bulletin, 61, 341-361.
Rieber, M. (1961). The effect of CS presence during delay of reward on the speed of an instrumental response. Journal of Experimental Psychology, 61, 290-294.
Saltzman, I. J., Kanfer, F. H., & Greenspoon, J. (1955). Delay of reward and human motor learning. Psychological Reports, 1, 139-142.
Schaal, D. W., & Branch, M. N. (1988). Responding of pigeons under variable-interval schedules of unsignaled, briefly signaled, and completely signaled delays to reinforcement. Journal of the Experimental Analysis of Behavior, 50, 33-54.
Schwarz, M. L., & Hawkins, R. P. (1970). Application of delayed reinforcement procedures to the behavior of an elementary school child. Journal of Applied Behavior Analysis, 3, 85-96.
Sidman, M. (1960). Tactics of scientific research. New York: Basic Books.
Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. New York: Appleton-Century-Crofts.
Skinner, B. F. (1953). Science and human behavior. New York: The Free Press.
Spence, K. W. (1956). Behavior theory and conditioning. New Haven: Yale University Press.
Sutphin, G., Byrne, T., & Poling, A. (1998). Response acquisition with delayed reinforcement: A comparison of two-lever procedures. Journal of the Experimental Analysis of Behavior, 69, 17-28.
Terrell, G. (1958). The role of incentive in discrimination learning in children. Child Development, 29, 231-236.
Terrell, G., & Ware, R. (1961). Role of delay of reward in speed of size and form discrimination learning in childhood. Child Development, 32, 409-415.
Vollmer, T. R., Borrero, J. C., Lalli, J. S., & Daniel, D. (1999). Evaluating self-control and impulsivity in children with severe behavior disorders. Journal of Applied Behavior Analysis, 32, 451-466.
Ware, R., & Terrell, G. (1961). Effects of delayed reinforcement on associative and incentive factors. Child Development, 32, 789-793.
Watson, J. B. (1917). The effects of delayed feeding upon learning. Psychobiology, 1, 51-59.
Williams, A. M., & Lattal, K. A. (1999). The role of the response-reinforcer relation in delay-of-reinforcement effects. Journal of the Experimental Analysis of Behavior, 71, 187-194.
Williams, B. A. (1976). The effects of unsignaled delayed reinforcement. Journal of the Experimental Analysis of Behavior, 26, 441-449.
Williams, B. A. (1998). Relative time and delay of reinforcement. Learning and Motivation, 29, 236-248.
Jolene Rachel Sy was born in Sacramento, CA in 1981 and was raised in the Sacramento area. In 1998, Jolene moved to Santa Cruz, CA to attend the University of California, Santa Cruz. After earning a Bachelor of Arts in language studies, Jolene began working as a behavior analyst with children with ASD. It was at this time that she became interested in behavior analysis. In 2005, Jolene moved to Stockton, CA to attend the University of the Pacific (UOP). At UOP, Jolene served as a research assistant in Dr. John Borrero’s laboratory and earned a master’s degree under his supervision. In 2007, Jolene entered the doctoral program in behavior analysis under the supervision of Dr. Tim Vollmer. Jolene earned a Doctor of Philosophy in 2011, and will relocate to Saint Louis to begin an assistant professor position at Saint Louis