Object handovers between humans are common in our daily life, but the mechanisms underlying handovers are still largely unclear. A good understanding of these mechanisms is important not only for a better understanding of human social behaviors, but also for the prospect of an automated society in which machines will need to perform similar object exchanges with humans. In this paper, we analyzed how humans determine the location of object transfer during handovers: whether they can predict the preferred handover location of a partner, how this prediction varies in 3D space, and how much of a role vision plays in the process. For this, we developed a paradigm that allows us to compare handovers by humans with and without online visual feedback. Our results show that humans have the surprising ability to modulate their handover location according to partners they have just met, such that the resulting handover errors are on the order of a few centimeters, even in the absence of vision. The handover errors are smallest along the axis joining the two partners, suggesting a limited role for visual feedback in this direction. Finally, we show that the handover locations are explained very well by a linear model considering the heights, genders, and social dominances of the two partners, and the distance between them. We developed separate models for the behavior of ‘givers’ and ‘receivers’ and discuss how the behavior of the same individual changes depending on their role in the handover.
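As a sketch of the kind of linear model described above (the symbols below are illustrative placeholders, not the paper's own notation), the handover location along one axis could be written as a weighted sum of the partners' attributes:
$$x_{\text{handover}} = \beta_0 + \beta_1 h_{\text{giver}} + \beta_2 h_{\text{receiver}} + \beta_3 g_{\text{giver}} + \beta_4 g_{\text{receiver}} + \beta_5 s_{\text{giver}} + \beta_6 s_{\text{receiver}} + \beta_7 D + \varepsilon,$$
where $h$, $g$, and $s$ denote a participant's height, gender, and social dominance, $D$ is the distance between the two partners, and $\varepsilon$ is a residual term; separate coefficient sets would be fit for the giver and receiver models.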