Depends on whether you consider sentience connected to intelligence. Robots may indeed become smarter than humans (not necessarily wiser), but they aren't and won't be sentient.
If they have sensory input they can decipher and a sense of self, they'll be sentient. It's really the sense of self that's the big hurdle (deciphering sensory input is coming along very well), and the big danger. A sense of self includes the ability to decide that the situation they're in sucks, and that's when the bad things would start to happen unless there are a lot of programmed restraints. Of course, being a QA engineer, I'm not sure I really want to trust programmed restraints; we're better off making sure they never decide the situation they're in sucks in the first place. You don't want to be the foreman the day the fully mobile robot that can carry 2 tons and has a built-in welder decides management are a bunch of a##&*les. That's the whole problem with the "ethical slave" idea: no kind of slavery is ever really ethical to the slave.