Hints from Life to AI, edited by Ugur HALICI, METU, 1994 ©

 

 

 


Pairing Identical Objects in Stereo Human Vision

 


Umur Talasli

Department of Psychology

Middle East Technical University

06531, Ankara, TURKEY

 


Pairing identical objects in stereo human vision rests upon object perception, which is one of the least understood questions in experimental psychology. The present paper attempts to find a primitive, data-driven process that serves figure-ground segregation, drawing upon various findings in the literature as well as certain phenomenal experiences. In the final analysis, the most plausible answer appears to be related to motion parallax.

 


1. Introduction

 

The title of this paper may give the impression that we, as psychologists, have extensive knowledge on the topic and are in a position to provide many hints from natural life to artificial intelligence. The truth, however, is just the opposite. Pairing identical objects in stereo human vision is one of the least understood subjects because it rests upon the mystery of object identification in monocular vision. Had we known how the human system achieves the segregation of objects from each other and from their backgrounds monocularly, the issue in binocular vision would have boiled down to the question of determining object correspondence in the left versus right eye's images. But the current state of the art in psychology is that we have come to realize how limited each theory of object perception is, even in monocular vision. According to some, the mistake that we seem to make is confining our explanatory efforts within the bounds of the visual cortex. Realizing that the visual cortex is not the only place in the brain where visual information is processed, Sekuler & Blake (1990), for instance, have made the interesting remark that "Trying to explain all of pattern perception with concepts derived from just the visual cortex is like trying to comprehend a novel by reading only a small part of it" (p. 170). Many other recent treatments of the issue (e.g., Matlin, 1988) appear to adhere to the same point: that form perception is sandwiched between data-driven (bottom-up) and concept-driven (top-down) processes, and therefore, if one is to understand how an object is perceived as an object, it is imperative that other brain functions such as prototypical representations of objects, context-defining schemata, etc., are also addressed.

 

Obviously, this kind of approach does not present much help to artificial intelligence. If the natural system does indeed achieve object perception by way of such complicated interactive processing, which involves the so-called "world knowledge" and disambiguating mechanisms, then we should wait a long while before robots can do the job. But we know that artificial intelligence has reached a certain degree of success in determining the boundaries of entities, thus segregating them from each other and their backgrounds. We also know that most of this achievement comes from data-driven processing, conceptual guidance being negligible as compared to the human case. Perhaps, psychology should get a hint from this achievement; segregation of objects from each other and their backgrounds can, in principle, be achieved by data-driven processing alone. This hint confronts us with a sobering question: Are we, as psychologists, overcomplicating the issue?

 

Unfortunately, there is no simple answer to this question. The literature on pattern perception abounds in evidence pointing to the complexity of the matter. Deciding what a certain pattern represents, sticking to the same decision despite variations in the retinal image information of the pattern (shape constancy), and many related phenomena present real challenges. But on the other hand, must the perception of an object as a segregated entity always involve the complex perception of meaningful form? In other words, can the natural system not behave like artificial vision systems: first detecting the existence of a segregated entity through data-driven processing alone, and only then wondering about its meaning? I think this is entirely possible, but since no one has put forth a question in this form, there is no experimental support that I know of. However, phenomenal experience undoubtedly attests to it; we occasionally find ourselves in a perceptual situation where we identify an object but cannot make anything out of it. Thus, the "pattern" is not perceived, but the "object" is.

 

This, perhaps, is where our failure lies. We may be overcomplicating the issue by not differentiating between object perception and pattern perception, thereby drowning the relatively simple mechanism(s) of object segregation in the complexity of interactive processing. In other words, object segregation may proceed via data-driven processing alone, and upon the identification of the figure as a meaningless entity, pattern perception may begin operating in interactive ways. This is a sensible two-stage approach, wherein the system first determines the figural boundaries, and later invokes top-down processing to make some sense out of it. In fact, as long as figural boundaries are not delineated as a foundation for further processing, how will the system know where to apply top-down knowledge? Hence, it seems only rational to suppose that the perception of form, in general, should have a data-driven component, working independently of higher processes, so that the stage is set for perceptual analysis.

 

The approach in psychology that comes closest to a primitive, data-driven process in object perception as described above is the famous Gestalt laws of perceptual organization. According to these, the ingredients of a visual stimulus are grouped together by obeying certain principles. For instance, if a collection of dots share the same intensity or color, then the system groups them together and perceives them as belonging to the same entity.

 

Also, if certain ingredients are close to each other, there is a strong tendency to see belongingness. In case some components share the same primitive shape or orientation (i.e., tilt), grouping is again inevitable. If there are small gaps in a contour, the system tends to close them and perceive segments as parts of the same figure. Furthermore, the natural system "prefers" to see two line segments as belonging to each other if they show smooth continuations rather than if they exhibit abrupt changes of direction. Such principles and many more are proposed as the basis of object perception. The fact that successful camouflage impairs object perception is a victory for Gestalt in that certain ingredients of the figure are grouped together with those of the background, thereby preventing the emergence of the figure.

 

Are these principles sufficient to explain object perception as a primitive data-driven process, working independently of top-down influences to initially produce the figure as a meaningless entity? I think not, for the following reasons:

 

1. In perceiving a human face, for example, the ingredients of the hair, forehead, and eyebrows get grouped together as separate entities. But it remains a mystery as to how these separate regions are connected to each other. On the other hand, when one looks at an area of mosaic, one can often see human faces. Such percepts are not necessarily dictated by grouping principles; running counter to Gestalt laws in such situations is the rule rather than the exception. It seems the human system is well "prepared" to carve out human faces from a random collection of dots. Furthermore, hints and suggestions to the perceiver can have an enormous influence on figure-ground perception (see Kennedy, 1974).

2. Counts of the number of Gestalt laws invoked to explain object perception reach 114 (Pomerantz, 1986). Such a huge number of rules can hardly be built into a primitive, purely data-driven process. The sheer number seems to speak to higher levels of processing.

3. Gestalt theorists (Wertheimer, 1923; Koffka, 1935; and Köhler, 1947) asserted that grouping operations are automatic because they are "isomorphic" (i.e., similar) to neurological functions of the brain. These neurological functions, they argued, involve electromagnetic fields in the brain which produce "forces." Hence, groupings are "forced" by these electromagnetic fields which mediate nativistic organizational tendencies. Such tendencies are assumed to replicate the organizational structure of the whole universe. In other words, since the human brain is part of the universe, its functional characteristics must obey the "master" laws of organization. When put in these terms, the Gestalt explanation of object perception boils down to a nativistic and automatic mechanism that does not require top-down influences or "world knowledge." But unfortunately, research has not supported the neurological basis of Gestalt theory (Hatfield & Epstein, 1985; Pomerantz & Kubovy, 1981). Therefore, we cannot accept Gestalt psychology as providing the primitive, automatic mechanism by which figural boundaries are delineated.

 

Other theories that adopt a data-driven or bottom-up approach can be listed as Campbell & Robson's (1968) multi-channel theory of spatial frequency analysis, Marr's (1982) three-stage theory of image representation, and Biederman's (1985) theory of segmentation into regular shapes. However, all of these theories are more concerned with "pattern" perception than "object" perception. That is, they emphasize decoding of the pattern information contained within the object so that the object is perceived as a segregated entity. Because they put the emphasis as such, they are all criticized by those who believe top-down processing is indispensable in decoding the pattern (e.g., Hochberg, 1971). Hence, these theories do not seem to provide the purely bottom-up processing for the delineation of figural boundaries, one that could proceed without reference to pattern information contained within the figure.

 

2. Search for a Possible Mechanism

 

In search of a purely bottom-up process that carves out the figure as initially a meaningless entity, we seem to find nothing specific in the psychological literature. However, this does not mean that the literature provides no hints or suggestions in that regard. If we combine three things, namely, some specific laboratory findings, phenomenal experience, and laboratory "secrets" that do not always appear in print, we may arrive at a coherent hypothesis about that data-driven process. Such a line of thinking may even go beyond a mere hypothesis and allow us to assess the problem with a broader perspective. This is the thrust of the present article.

 

To begin the discussion with a somewhat familiar phenomenal experience, we should consider the following situation: We sometimes find ourselves confronting a two-dimensional (2-D) display where we are totally unable to identify a figure. Meaningless regions of various brightnesses and colors seem to blend into one another. The interesting observation, however, is that we almost never have this kind of difficulty in three-dimensional (3-D) perception. In 3-D, no matter how unfamiliar an object and its surroundings may be, the figure stands out solidly against its background. This brings a very important question to mind: Is the primitive, data-driven process that carves out the figure as initially a meaningless entity connected to the mechanisms of stereoscopic depth perception? More specifically, does the natural system make a point-by-point comparison of the left versus right eye's images, thereby detecting disparity for certain regions of the visual field, and consequently segregating these regions as figures for further processing? In other words, are objects identified as objects because of depth differences alone? If so, this means depth perception precedes form perception and allows object segregation to be independent of pattern identification. This, then, may be the way by which we bypass the complicated interactive processing for object perception, at least in the natural 3-D viewing situations which constitute the overwhelming majority of visual stimulation.

 

Fortunately, there is an experimental counterpart of the phenomenal experience described above. Julesz (1971) started an experimental investigation of the possibility that depth perception may precede form perception by means of the so-called random-dot stereogram technique. In these studies, subjects are presented, in a stereoscope, with a pair of visual displays, one for each eye, which consist of a large number of small black and white squares. The series of black and white squares are randomly generated by a computer. In each display, there is nothing to be discerned in the form of an object because there is neither any contour information, nor is there a variation of texture, color, or brightness at any region so as to define an object. However, the experimenter arbitrarily selects a region in the left display, in the form of a "T", for example, and shifts the corresponding region in the right display towards the left by the distance of a couple of squares. The columns thus vacated in the right display are again filled in randomly by black and white squares. So, we have a situation here which gives no information about an object in monocular viewing. But in binocular viewing via a stereoscope, the displays effectively produce the so-called "crossed disparity," the very information that serves to make the shifted region appear to float over the remaining unshifted parts. Hence, subjects report seeing a "T" made up of small black and white squares floating above a background of similar squares. Since no subject knows in advance what is to be seen, the success in reporting the object is taken as evidence that the natural system achieves object perception in the absence of form information. This experimental finding is also nice support for the conceptual distinction between "object" perception and "pattern" perception, which was discussed earlier. In this paradigm, subjects can perceive the "pattern" only after they perceive the "object," and the pattern of the object is of no help whatsoever for the segregation of the object from the background. So, there seems to exist a primitive, data-driven, and independently working process of object perception. In another manner of speaking, the natural system appears to behave like artificial intelligence in that it seems to indulge in a pixel-by-pixel analysis of the visual information and finish the job without top-down aid.
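The construction Julesz described can be sketched in a few lines of code. The sketch below (in Python with NumPy) is only an illustration: the display size, the two-square shift, and the particular "T" mask are arbitrary choices of mine, not Julesz's actual parameters. It builds the left-eye display from random black/white squares, shifts a T-shaped region leftwards in the right-eye display, and refills the vacated columns randomly, so that neither display alone carries any monocular trace of the figure:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, SHIFT = 40, 40, 2  # display size in squares; leftward shift of 2 squares

# Left-eye display: random black (0) / white (1) squares.
left = rng.integers(0, 2, size=(H, W))
right = left.copy()

# Hypothetical "T"-shaped region: a horizontal bar plus a vertical stem.
mask = np.zeros((H, W), dtype=bool)
mask[10:14, 10:30] = True   # top bar of the T
mask[14:30, 18:22] = True   # stem of the T

# Shift the masked region SHIFT squares to the left in the right-eye display.
rows, cols = np.nonzero(mask)
right[rows, cols - SHIFT] = left[rows, cols]

# Columns "vacated" at the right edge of the region are refilled randomly,
# so the right-eye display is, by itself, just another random texture.
vacated = mask & ~np.roll(mask, -SHIFT, axis=1)
right[vacated] = rng.integers(0, 2, size=int(vacated.sum()))
```

Viewed monocularly, `left` and `right` are statistically indistinguishable random textures; viewed stereoscopically, the shifted region carries crossed disparity and should appear to float above the background.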

 

The question now is whether we should conclude that the natural system ordinarily achieves object segregation through disparity detection, or whether this is, perhaps, a very special case. That is to say, the natural system can be forced to do it under special circumstances, such as the random-dot stereogram situation, but it may prefer another mechanism in natural settings. There seems to be certain suggestive evidence pointing in the direction of the second possibility.

 

One aspect of the random-dot stereogram situation that has not received much publicity is the fact that subjects taking part in these kinds of experiments generally have a hard time seeing the floating object. They either see it after a relatively extended period of time, or they do not see any object at all, or they see things that were not intended to be there. For instance, during the review process of an article of mine, an anonymous reviewer who happened to be working with random-dot stereograms remarked that his subjects saw "the darndest things in the stereoscope!" This is first-hand information, almost a laboratory secret, to the effect that object segregation through disparity alone is a laborious process.

 

Yet another aspect of the situation seems to bolster this belief: if the experimenter shifts the corresponding arbitrary region rightwards in the right picture, thereby producing uncrossed disparity for the intended object, the figure becomes even harder to perceive. The "T" figure, for instance, should then be perceived not as a floating figure, but as a hole in the form of a T. Some subjects eventually see this hole, but obviously, with much greater difficulty. The message here is that, if the natural system ordinarily used disparity as the basis of object segregation, then that mechanism would be expected to work in both directions with equal facility. That there is a tendency to see the "T" as a figure rather than as the visible portion of the background surface indicates that whenever a Gestalt principle of figure-ground segregation is allowed to operate, that principle easily overrides disparity computation (see Rock, 1975). More specifically, the Gestalt principles of "surroundedness" and "smallness" enable the "T" to float as a figure, but resist seeing the larger, surrounding part of the display as such. In conclusion, then, object perception via disparity alone is possible and relatively easy only if there is no competing principle at work. This casts quite a bit of doubt on whether disparity computation is the natural means of primitive object segregation.

In contrast to what happens in the laboratory with random-dot stereograms, if a cardboard cutout with a checkered surface pattern in the form of a T is placed in front of a background with, again, a checkered surface pattern, no subject encounters any difficulty in identifying the T as the figure. That is, when we mimic the stimulation intended in the stereoscope in a physical setting and allow subjects free-viewing access to the stimulus, none of the difficulties described in the preceding paragraphs is observed. Identification of the figure is immediate, effortless, and accurate. What is it, then, in the natural setting that allows the speedy formation of such an accurate percept? Whatever it is, we know that it is something more than disparity.

 

In search of the facilitatory factor in natural settings, we could make use of a hierarchy of difficulties encountered by subjects. We know that the greatest difficulty is to be found in random-dot stereograms. This is where disparity works alone and is understood to be barely sufficient for object segregation. Next, as we once tried in a class demonstration, identification of a floating figure is much facilitated when figural boundaries are drawn with solid lines on a checkered surface pattern. Here, disparity is complemented by boundary lines and, therefore, works much better. So, we cannot disregard the importance of contour information in object segregation -- a victory for Gestalt principles. However, when a 2-D display is presented with zero disparity, subjects can be made to confuse figures with backgrounds if the drawings are ambiguous enough. But the same drawings with disparity injected present identifiable figures. From such manipulations we understand that disparity and contours do not constitute levels in a hierarchy, but simply complement each other.

 

This complementary relationship between disparity and organizational principles based on pictorial cues is evident only if pictures are ambiguous with respect to figure-ground relations. In unambiguous pictures, there is not much evidence of complementation. For instance, what happens when a pair of unambiguous pictures in a stereoscope are interchanged, i.e., the left picture is placed in front of the right eye and vice versa? Unfortunately, there is no formal literature relating to this playful manipulation, but phenomenal experience shows some clear consequences. The net result of the manipulation is, of course, reversal of disparity information, so that depth relations via disparity are brought into conflict with those mediated by pictorial cues. The point of interest within the present context is what happens with respect to figure-ground relations. One clear result is that disparity reversal does not interfere with object segregation. Objects are perceived as such without any confusion or effort, but whenever an object is disconnected from any other object in terms of contour, reversed disparity brings the more distant object to the fore. On the other hand, whenever objects' contours touch each other, the effect of disparity is reduced to nil, so that a collection of objects in contour contact appears two-dimensional. What are we to infer from this? The most plausible inference seems to be that pixel-by-pixel computation of disparity serves object segregation only when disparity is left alone or when contour information is ambiguous. Whenever unambiguous contour information is available to the natural system, it prefers to give that kind of information absolute priority.
Hence, pairing identical objects in stereo human vision does not appear to rest on disparity computation as the primary process; rather, object segregation (without involvement of disparity) occurs in the first stage, it is followed by object correspondence in the left and right images, and only then comes disparity computation.

 

We have still not captured the primitive, data-driven process for object segregation. Use of contour information is certainly not primitive and automatic enough. To continue our quest by way of the hierarchy of difficulties, we can see that contour utilization for object perception does not constitute the easiest level; the hierarchy continues. Those who work with stereoscopes know well that not all people successfully see depth the first time they look into a stereoscope, even if the pictures are unambiguous and full of visual cues. Despite repeated exposures, some continue to see the pictures as flat, wondering what difference other people are talking about when comparisons are made with ordinary photographs. Had these people, failing in the stereoscope, been stereo-blind, nothing could have been made of this. But such people begin to see depth in the stereoscope all of a sudden, and after that point, are very much surprised at not having seen this remarkable third dimension previously. I think there is another lesson for us here, which merits careful analysis.

 

What might be the factor that delays some people's perception of depth in the stereoscope? Our overall assessment of the literature, as reviewed so far, suggests that there must be a three-stage process going on in stereoscopic depth perception. The first stage is object segregation, the second is object correspondence (or pairing of identical objects in the right and left images), and the third is disparity computation on paired objects. Could these people have a problem in the first stage? Definitely not, because they readily identify objects in the picture. What about disparity computation? We have no reason to expect so, because these persons prove themselves to be stereo-sighted; i.e., they can eventually see depth in the stereoscope and report that what they see now is like real life, thereby indicating that they are capable of disparity computation in the natural setting as well. Through a process of elimination, then, we end up suspecting a problem in the second stage, that of object correspondence.

 

This speculative approach leads us to an interesting point. The cortical images generated by the stereoscope and by real life are not different from each other in any respect, yet the system appears to encounter a difficulty in finding correspondences when the image comes from the stereoscope. Why should this be so? We cannot attribute the difference in phenomenal experience to a difference in the cortical state of affairs because there is no difference there. In both cases, the cortex carries superimposed double images, relayed by the left and right eyes. Logically, the difference in the facility of finding correspondences must be due to a non-cortical factor, which can be explored by looking for a difference between the real-life versus stereoscope stimulations.

 

A careful inspection and comparison of the two sources of stimulation reveals only one major difference. Because pictures in the stereoscope are actually 2-D, all objects therein stand at the same physical distance from the viewer. As a consequence of this, head movements that occur during looking into the stereoscope cause retinal images of all objects in the pictures to move at the same speed and in the same direction. In contrast to this, head movements that occur during looking at the real world cause retinal images of objects to move at differential speeds and in different directions. This is a straightforward dictation of geometry and optics. In real-life situations, head movements -- no matter how minor they may be -- are bound to cause retinal image movements that are differential as a function of objects' distances from the viewer. Retinal image movements produced by head movement present two aspects: First, images of objects further away than the point of fixation move in the same direction as the head, whereas those closer than the fixation point move in the opposite direction. Second, images of two objects, both of which are closer than the fixation point, move at different speeds, the closer object projecting a faster moving retinal image. As for the images of two objects, both of which are further away than the fixation point, the more distant one projects a faster image. This phenomenon, referred to as "motion parallax," is what is lacking in the stereoscope.
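The geometry just described can be checked with a small calculation. In the sketch below (Python; the fixation distance, object depths, and 1 cm head displacement are illustrative values of my own choosing), the retinal position of an object lying straight ahead is measured as its angular direction relative to the gaze direction while the eye fixates a point at a fixed depth and the head shifts sideways:

```python
import math

def retinal_angle(obj_depth, head_x, fix_depth):
    """Angular position of an object at lateral offset 0, measured relative
    to the gaze direction, for an eye displaced laterally by head_x while
    fixating a point straight ahead at fix_depth (distances in metres)."""
    return math.atan2(-head_x, obj_depth) - math.atan2(-head_x, fix_depth)

FIX = 2.0               # fixation distance
near, far = 1.0, 4.0    # one object nearer, one farther than fixation

# Move the head 1 cm to the right and see how each image is displaced.
d_near = retinal_angle(near, 0.01, FIX) - retinal_angle(near, 0.0, FIX)
d_far = retinal_angle(far, 0.01, FIX) - retinal_angle(far, 0.0, FIX)
```

For the rightward head movement, `d_near` comes out negative (the near image moves opposite to the head) and `d_far` positive (the far image moves with the head); comparing objects on the same side of the fixation point reproduces the speed orderings stated above.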

 

 

Psychology is interested in motion parallax primarily as an important cue to depth but the present analysis suggests that parallax information may have another important function, that of finding object correspondence in the speediest and easiest way. To repeat the rationale behind this suggestion, the following steps in thinking should be considered: (1) Stimulation from the stereoscope achieves depth in a more belabored manner as compared to that from the real world. (2) The difficulty in the stereoscope can be pinpointed as a problem of detecting object correspondence. (3) The only difference between the two kinds of stimulation seems to be the presence of motion parallax information in the real world. Therefore, (4) motion parallax seems to serve object correspondence.

 

This, then, is what appears to happen in the natural system as it views the real world: With every slight head or body movement, points across the retinal mosaic are set into motion. Amongst this huge pool of points moving at different speeds and directions, certain collections of points share the same direction and speed. Let us now analyze what this means to the system in monocular and binocular conditions.

 

In the monocular condition, this uniform speed and direction in certain regions of the visual field could be used by the system to carve out a meaningless entity, upon which pattern perception processes could begin to operate. But there is still some ambiguity in this information; differential motions of points need not emanate from variable distances of separate entities, but could result from variable motion of different parts of the stimulus complex at the same distance. Since the system does not know about distances in the monocular case, motion parallax information obtained in a single eye is not unambiguous enough to serve object segregation.

 

In binocular viewing, on the other hand, we see that this information is readily transformed into a very clear kind of help. Specifically, for every region in the right eye's image that moves at uniform speed and direction, there is a corresponding region in the left eye's image that gives the same motion information. Without attempting to delineate figural boundaries and decode the pattern information at this stage, the system may simply detect the presence of corresponding regions in the left and right images. The system does not yet know whether the region of uniform speed represents a segregated entity; the ambiguity mentioned before still persists. However, now that corresponding regions have been detected, "candidates" for separate entities have emerged, and hence, it would be wise to apply a disparity computation to such regions. This is much more speedy and economical than pixel-by-pixel comparison for disparity in an overwhelming pool of points, without any foresight about what regions might represent possible entities. If the regions corresponding in motion do indeed represent a separate entity, then disparity computation will quickly make that region stand out in depth. At this point the system will know that it encounters a segregated entity and will immediately apply whatever processing is necessary to decode the pattern. The difficulty in random-dot stereograms is now understandable; since motion parallax is lacking in that situation, the system cannot utilize this primitive but efficient mechanism of finding corresponding regions, and therefore it relies solely on pixel-by-pixel comparison for disparity detection, so that the figure is identified via depth. This is a laborious task, and sometimes an impossible one. In natural situations, motion parallax seems to come to the rescue.
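The two-step strategy just described, matching regions across the eyes by their shared motion signature and then computing disparity only for the matched pairs, can be sketched as a toy procedure. Everything here is hypothetical illustration, not a model of the actual neural computation: the region summaries, their positions and motion vectors, and the matching tolerance are invented for the example.

```python
# Each eye's image summarized as regions, each with a horizontal position
# (in arbitrary image units) and a 2-D motion vector measured during a
# small head movement.
left_regions = [{"id": "A", "pos": 100, "motion": (3.0, 0.0)},
                {"id": "B", "pos": 220, "motion": (-1.5, 0.0)}]
right_regions = [{"id": "a", "pos": 96, "motion": (3.0, 0.0)},
                 {"id": "b", "pos": 221, "motion": (-1.5, 0.0)}]

def match_by_motion(lefts, rights, tol=0.25):
    """Pair regions across the two images whose motion vectors agree within
    tol, then report each pair's horizontal disparity (left pos - right pos).
    Only matched pairs ever reach the disparity stage."""
    pairs = []
    for L in lefts:
        for R in rights:
            if all(abs(l - r) <= tol for l, r in zip(L["motion"], R["motion"])):
                pairs.append((L["id"], R["id"], L["pos"] - R["pos"]))
    return pairs

pairs = match_by_motion(left_regions, right_regions)
# pairs -> [('A', 'a', 4), ('B', 'b', -1)]
```

Region A/a comes out with a large crossed disparity (+4), making it a candidate near object, while B/b shows a small uncrossed disparity; the point of the sketch is that disparity is computed over two candidate regions rather than over every point in the images.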

 

3. Conclusion

 

The foregoing treatment suggests that the natural system has multiple mechanisms for object segregation. In the most natural, free-viewing situation, the system looks at a 3-D world binocularly, often indulging in head and body movements. Motion parallax allows detection of corresponding regions, and the disparity computation applied thereafter secures segregation of objects. Pattern perception in this case seems to be the last stage. If, however, the system is not allowed motion parallax information, then object segregation mechanisms, which involve both bottom-up and top-down processes, come into play as the first stage; then comes object correspondence, followed by disparity computation. This is the situation with stereoscopes containing ordinary pictures, and it is the next most comfortable case for the system. Barring both motion parallax and disparity leaves the system with object segregation mechanisms alone. This is the case of viewing 2-D pictures, and we know that the system has a hard time resolving ambiguities. Finally, we can give the system nothing but disparity (the case of random-dot stereograms), and we find the system is often overtaxed. Hence, this hierarchy of difficulties in various perceptual situations supports the present hypothesis that motion parallax is not only a mechanism of depth perception but is also essential in object segregation and in pairing identical objects in stereo human vision.

 

This primitive and purely data-driven mechanism for object segregation, as suggested in the present paper, may be one of our essential inheritances from lower species. Perhaps it is proper to close this treatment with two amusing hints in that direction. One of these is that I have heard many pet owners say that their cats and dogs do not show much interest in watching television or in looking at poster pictures of other animals. I could add my own personal observations to these. Why should this be so? Could it be that lower species cannot achieve object segregation in 2-D displays and thus see nothing of interest in them? If the present thesis is on the right track, then these animals' lack of interest in pictures is quite understandable. Television screens and posters do not contain motion parallax (due to all objects' being equidistant), and hence these animals are left without the essential mechanism of object segregation. They could achieve segregation through top-down processing, but we know that their "tops" are not as good as ours! This is some fuel for thought, to say the least.

 

The second hint comes from science fiction. The scientist in "Jurassic Park" keeps yelling to others, "Don't move! If you stay still, she won't see you." The message is clear: The perceptual apparatus of the primitive dinosaur is totally geared to motion for detecting objects. Shall we believe it? Well, this is not the first time that science fiction is more perceptive than science.

 

 

References

 

1. Biederman, I. (1985). Human image understanding: Recent research and a theory. Computer Vision, Graphics, and Image Processing, 32, 29-73.

2. Campbell, F.W., & Robson, J.G. (1968). Application of Fourier analysis to the visibility of gratings. Journal of Physiology, 197, 551-566.

3. Hatfield, G., & Epstein, W. (1985). The status of minimum principle in the theoretical analysis of visual perception. Psychological Bulletin, 97, 155-186.

4. Hochberg, J. (1971). Perception: I. Color and shape. In J.W. Kling & L.A. Riggs (Eds.), Woodworth and Schlosberg's experimental psychology (3rd ed.). New York: Holt, Rinehart and Winston. Pp. 395-474.

5. Julesz, B. (1971). Foundations of cyclopean perception. Chicago: University of Chicago Press.

6. Kennedy, J.M. (1974). The psychology of picture perception. San Francisco: Jossey-Bass.

7. Koffka, K. (1935). Principles of Gestalt Psychology. New York: Harcourt Brace.

8. Köhler, W. (1947). Gestalt psychology: An introduction to new concepts in modern psychology. New York: Liveright.

9. Marr, D. (1982). Vision. San Francisco: Freeman.

10. Matlin, M.W. (1988). Sensation and perception. Needham Heights, MA: Allyn and Bacon.

11. Pomerantz, J.R. (1986). Visual form perception: An overview. In E. Schwab & H. Nusbaum (Eds.), Pattern recognition by humans and machines: Visual perception (Vol. 2, pp. 1-30). Orlando, FL: Academic.

12. Pomerantz, J.R., & Kubovy, M. (1981). Perceptual organization: An overview. In M. Kubovy & J.R. Pomerantz (Eds.), Perceptual organization (pp. 423-456). Hillsdale, NJ: Erlbaum.

13. Rock, I. (1975). An introduction to perception. New York: Macmillan.

14. Sekuler, R., & Blake, R. (1990). Perception. Singapore: McGraw-Hill.

15. Wertheimer, M. (1923). Untersuchungen zur Lehre von der Gestalt, II. Psychologische Forschung, 4, 301-350. Translated as Laws of organization in perceptual forms. In W.D. Ellis (Ed.) (1955), A source book of Gestalt psychology. London: Routledge & Kegan Paul.

 


 
