This is an old puzzle; I recall reading it in a Ted Chiang story, and I think in other places as well. It's a great example of how seemingly reasonable intuitions can lead us astray.
Premise 1. There could exist a book that contains infallibly accurate information about the future.
Premise 2. A robot could read this book.
Premise 3. The book might predict that, at some particular moment, the robot will perform some mundane action, like raising its grasper.
Premise 4. The robot might be programmed to be a perverse robot, in the sense that if anything or anyone makes a prediction about what it will do, it will do the opposite.
But it seems that premises 3 and 4 can't both be true, if the book is infallible and the robot has read it. It seems, quite generally, that if the robot reads the book, the book cannot contain any predictions about what the robot will do voluntarily (assuming the robot's programming remains intact and there are no errors).
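The clash between the two premises can be sketched as a simple diagonalization, in the style of the halting-problem argument. The function names and the two-action alphabet below are hypothetical illustrations, not part of the original puzzle:

```python
def perverse_robot(prediction: str) -> str:
    """A robot that, having read a prediction about its action, does the opposite."""
    return "lower grasper" if prediction == "raise grasper" else "raise grasper"

def prediction_comes_true(prediction: str) -> bool:
    """An infallible book's prediction must match what the robot,
    having read that very prediction, actually does."""
    return perverse_robot(prediction) == prediction

# No prediction the book could print survives being read by the perverse robot:
assert not any(prediction_comes_true(p) for p in ["raise grasper", "lower grasper"])
```

Whatever the book prints, the robot's program maps it to the other action, so no fixed point exists: an infallible book read by a perverse robot can contain no prediction about that robot's voluntary actions.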
But it feels weird, doesn't it? Infallible future-telling may not exist in our world, but it seems logically possible. And if infallible future-telling exists, there seems to be no reason why an agent shouldn't be able to access its results without suffering a malfunction or an abrogation of its programming.
The logical paradox here is, at heart, related to, or even identical to, the grandfather paradox in time travel. As in that case, we must say that in a world where time travel or future-telling is possible, any attempt to rewrite events (whether in a fixed past or a known future) will always be thwarted.