Robotic manipulation is fundamentally limited by the scarcity of robot demonstration data. We leverage egocentric human data to improve learning for humanoid robots with dexterous hands, bridging the embodiment gap between human and robot through visual guiding keypoints and a shared action representation. Our approach enables stronger generalization and the acquisition of new skills without collecting additional robot data.