Might I also point out the unstated assumptions in your question and in the fine answers thus far by Bill Thurston and Sergei Ivanov?
The unstated assumption is that we are speaking about opaque objects casting opaque shadows, and that all of the information content we obtain lies merely in the outer boundary/envelope of the shadow, whether we are talking about a 1-d shadow (a line segment) in $\mathbb{R}$ cast by an object in $\mathbb{R}^2$, a 2-d shadow in $\mathbb{R}^2$ cast by an object in $\mathbb{R}^3$, or a 3-d shadow cast onto $\mathbb{R}^3$ by an object in $\mathbb{R}^4$, and so on.
If we allow transparent and translucent objects, which pass (or scatter) graded amounts of light according to their local density distribution, then the shadows are no longer constant. Such non-constant shadow projections allow the observer to use a variety of algorithms to infer the interior density distribution of the shadow-casting object.
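One standard way to make this precise (assuming monochromatic, non-scattered radiation, i.e. the Beer–Lambert model) is to record, for each ray $L$ through the object, the transmitted intensity
$$ I(L) \;=\; I_0 \exp\!\Big(-\int_L \mu \, ds\Big), $$
where $\mu$ is the attenuation (density) function of the object; taking $-\log\big(I(L)/I_0\big)$ then yields the line integral of $\mu$ along $L$, and the collection of all such line integrals over all lines is precisely the Radon transform of $\mu$.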
This type of shadow analysis is used in Computerized Axial Tomography, also known as CAT scanning or CT scanning. Multiple 2-d images are acquired as an x-ray receiver (e.g. a CCD detector) is rotated around the object being probed, while an x-ray source is rotated synchronously on the opposite side of the object. The resulting collection of 2-d shadows of the x-rays passing through the body can then be used to reconstruct a fairly accurate rendition of the body's density distribution by inverting the Radon transform.
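As a toy illustration of that reconstruction step, here is a sketch using scikit-image's `radon`/`iradon` routines on the synthetic Shepp–Logan phantom; the phantom, the angle grid, and the ramp filter are illustrative choices here, not anything specific to a real scanner:

```python
# A toy sketch of the CT reconstruction idea (not a real scanner pipeline).
# Assumptions: scikit-image is available, the Shepp-Logan phantom stands in for
# the "body", and a simple ramp filter is used for the inverse Radon transform.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

density = rescale(shepp_logan_phantom(), scale=0.5)   # synthetic 2-d density map
angles = np.linspace(0.0, 180.0, max(density.shape), endpoint=False)

# Each column of the sinogram is the graded 1-d "shadow" (the line integrals of
# the density) recorded at one rotation angle of the source/detector pair.
sinogram = radon(density, theta=angles)

# Filtered back-projection (the standard inverse-Radon algorithm) recovers an
# approximation of the original density from its shadows.
# (Older scikit-image versions spell the keyword `filter=` instead of `filter_name=`.)
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

print("mean absolute reconstruction error:", np.abs(reconstruction - density).mean())
```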
Prior to the axial-rotation type of scanning, a linear type of scanning was used in a technique called tomography. The subject is placed at the origin of 3-space; a single piece of x-ray film, lying in the plane $y = +b$ (parallel to the $xz$-plane), is translated from position $(-a, +b, 0)$ at time $t_0$ to position $(+a, +b, 0)$ at time $t_1$, while the x-ray source is translated from position $(+a, -b, 0)$ at time $t_0$ to position $(-a, -b, 0)$ at time $t_1$. This is akin to taking a long-exposure photograph while moving the camera along a path but keeping it aimed at a single point in space. The resulting image is sharpest in the "focal plane" $y = 0$, while objects lying farther from that plane are progressively more blurred.
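To see why the plane $y=0$ stays sharp, here is a quick check under the symmetric motion just described, with the source at $(s(t),-b,0)$, $s(t)$ running from $+a$ to $-a$, and the film translated by $-s(t)$ in the plane $y=+b$. A point $P=(p_x,p_y,p_z)$ is projected from the source onto the film plane with ray parameter $\lambda = \tfrac{2b}{p_y+b}$, and its position measured in the moving film's own frame is
$$ s(t)\,(2-\lambda) \;+\; \lambda\, p_x , $$
which is independent of $t$ exactly when $\lambda = 2$, i.e. when $p_y = 0$. Points off that plane smear out over the exposure by $\bigl|\tfrac{2p_y}{p_y+b}\bigr|$ times the length of the translation, an amount that grows with the distance from the focal plane.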
There is a lot of very interesting mathematics involved in the signal acquisition and signal analysis of Axial Tomography and in the image reconstruction algorithms, as well as in MRI (magnetic resonance imaging).
Also, as an aside, no one has specified whether the "light sources" casting the shadows are near-field point sources at a distance $d$ within a few orders of magnitude of the size of the objects, or whether the light source is effectively a point source at infinity, casting parallel (orthographic) projections. Nor was the specific region receiving the shadow clearly defined. I only mention this because of the recent questions on MathOverflow concerning the importance of rigour in mathematics, and because I do believe in the importance of rigour in formulating a question and, thus, in delimiting the domain in which the subsequent mathematical calculations can be applied.
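To fix one possible convention (the coordinates here are just for illustration): a point source at $(0,0,d)$ projecting onto the screen $z=0$ sends
$$ (x,y,z) \;\longmapsto\; \frac{d}{d-z}\,(x,y), $$
and letting $d\to\infty$ recovers the parallel (orthographic) projection $(x,y,z)\mapsto(x,y)$, which is what "a point source at infinity" amounts to.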
Allowing for non-opaque objects, and hence graded shadows, makes it much more difficult to find another object in $\mathbb{R}^3$ whose projected shadow matches the intensity distribution of the projected shadow of a translucent sphere.
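For instance, under the Beer–Lambert model above with a constant attenuation coefficient $\mu$, the parallel-projection shadow of a translucent ball of radius $R$ has the radial intensity profile
$$ I(r) \;=\; I_0\,\exp\!\bigl(-2\mu\sqrt{R^2-r^2}\,\bigr), \qquad 0 \le r \le R, $$
since a ray passing at distance $r$ from the center traverses a chord of length $2\sqrt{R^2-r^2}$; an object that reproduces this graded profile from every direction is far more constrained than one that merely reproduces the circular outline.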