You’re also wrong about a single data point not making much of a difference in OLS regression with a modest number of data points: consider the two data sets
(1,1.1), (2,0.9), (3,1.0), (4,1.0), (5, 0.9), (6, 1.1)
and
(0, 0.0), (1,1.1), (2,0.9), (3,1.0), (4,1.0), (5, 0.9), (6, 1.1)
In the first, the regression line is y = 1; in the second, it has positive slope and a y-intercept less than 1.
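A quick sketch makes this easy to verify. The helper below computes an OLS slope and intercept directly from the usual formulas (it is an illustration, not anyone's production code) and fits both data sets above:

```python
def ols(points):
    """Return (slope, intercept) of the ordinary least squares line
    through a list of (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points)
    sxx = sum((x - mx) ** 2 for x, _ in points)
    slope = sxy / sxx
    return slope, my - slope * mx

first = [(1, 1.1), (2, 0.9), (3, 1.0), (4, 1.0), (5, 0.9), (6, 1.1)]
second = [(0, 0.0)] + first  # same points plus the single errant (0, 0)

print(ols(first))   # slope ~ 0, intercept ~ 1: the flat line y = 1
print(ols(second))  # positive slope (3/28), intercept 15/28, well below 1
```

One added point out of seven turns a perfectly flat fit into a visibly tilted one.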
In fact, this is a toy version of the phenomenon for which Briggs gave a more realistic example.
And where the errant point falls does matter. If instead (3.5, 0.0) had been inserted into the first data set,
the regression slope would still be 0, and only the intercept (or rather the height of the whole horizontal regression line) would have changed.
Of course, with larger data sets, the outlier would need to be more extreme.
With a hundred or so data points, in any reasonably self-consistent data set (e.g. surface temperature) that isn't likely to contain wild values, no single point has much chance of drastically influencing the slope of the line.