It is quite interesting to look, if only casually, at the two trends that are emerging regarding proof assistants:
1) the “natural language” trend: one finds there the work of Ganesalingam & Gowers (which, so far, uses full first-order logic and deals with metric spaces), and also the interesting work of Stojanović, Narboux, Bezem & Janičić (which, so far, uses a fragment of first-order logic and deals with geometry à la Tarski). The latter paper discusses the mild differences between the two approaches, and both papers show examples of the proofs these programs produce. See also GeoProof (maintained by Narboux); for example, this nice animation shows a Coq-verified interactive proof of Thales’ theorem.
2) the “univalent foundations” trend: Homotopy Type Theory (HoTT), with these slides by Voevodsky providing a recent account.
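To give a concrete (if toy) sense of what a machine-checked proof looks like in a modern type-theory-based assistant, here is a minimal sketch in Lean. It is purely illustrative, not drawn from the papers or slides mentioned above; `Nat.add_comm` is a lemma from Lean’s standard library.

```lean
-- A theorem that holds by pure computation: the checker accepts
-- `rfl` (reflexivity) because both sides reduce to the same value.
example : 2 + 2 = 4 := rfl

-- A theorem discharged by invoking an existing library lemma;
-- the assistant verifies the proof term instantly.
example (m n : Nat) : m + n = n + m := Nat.add_comm m n
```

Proofs like these are accepted or rejected by the kernel in milliseconds, which is the experience the “instant” scenario below extrapolates from.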
All this raises the question: will these two trends merge productively over the next decade? I do hope so (creating a setting with foundations as sound as one could wish for, while ensuring that the certified proofs produced remain nicely human-readable). And in that case, I wonder what can reasonably be anticipated.
Probably the gradual emergence of a single big website where mathematicians would write down their proof strategies, which would get accepted or refused in an instant. Obviously, in parallel this would mean the gradual end of some technical aspects of peer review, but humans would still be left to decide which theorems are or aren’t interesting, and which papers are or aren’t well written.
On a longer time scale, one may envision a time when, with computers indiscriminately generating all possible theorems by raw enumeration, humans will be left completely out of the game, with nothing to do but read what the computers have found. In something like 30 or 50 years’ time, that’s a distinct possibility.