If you do that, though, you're adding a keyword search to the posting process: it would have to search the index to match the current user-supplied keywords against the keywords on other articles, then pull out associated keywords not included in the current submission and present them to you as possibilities. You could do that, but I'd want to know more about how often the search function we're already proposing would be used, how often articles are posted - and hence how often the index search would be triggered in that context - and what kind of server load all these new search functions would add. It's easy enough to imagine these things, especially when you have unlimited imaginary hardware to run them on, and someone else to sit down and do the actual coding ;)
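For what it's worth, that suggestion step could be sketched roughly like this. Everything here is made up for illustration - the in-memory stand-in for the index, the article data, the function name - it's just the shape of the lookup, not anyone's real design:

```python
from collections import Counter

# Hypothetical stand-in for the article index: article id -> its
# user-supplied keywords. A real system would query the database instead.
ARTICLE_KEYWORDS = {
    1: {"networking", "tcp", "linux"},
    2: {"networking", "security", "firewall"},
    3: {"linux", "kernel", "scheduler"},
}

def suggest_keywords(submitted, index=ARTICLE_KEYWORDS, limit=5):
    """Suggest keywords that co-occur with the submitted ones in other
    articles, excluding the ones the poster already supplied."""
    submitted = set(submitted)
    counts = Counter()
    for kws in index.values():
        if submitted & kws:                 # article shares at least one keyword
            counts.update(kws - submitted)  # tally its other keywords
    return [kw for kw, _ in counts.most_common(limit)]
```

So `suggest_keywords({"networking"})` would offer things like "tcp" and "firewall" - keywords other articles paired with "networking". The cost concern above is exactly this loop: done naively it scans every article on every post, which is why you'd want some precomputed index instead.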
Yeah, something like that. How much overhead would it require? No idea - I'm not a programmer. (Which, as you note, makes it easy to think up stuff for others to code!) I would guess, though, that you'd keep the keywords in a separate table, to which the keyword field in the table of articles (or article headers?) would have a many-to-many relationship. Then I suppose you could have some kind of optimized index describing how keywords relate to each other (maybe only tracking some subset of the more commonly used keywords). You'd then update this index a couple of times a day, at off-peak hours.
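To make that guess concrete, here's a minimal sketch in SQLite: a keywords table, a link table joining articles to keywords (the relationship is many-to-many, since keywords are shared across articles), and a co-occurrence table rebuilt periodically as the "optimized index". All table and column names are invented for the example:

```python
import sqlite3

# Illustrative guess at the layout, not a real schema.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE keywords (id INTEGER PRIMARY KEY, word TEXT UNIQUE);
    -- one row per (article, keyword) pair: the many-to-many link
    CREATE TABLE article_keywords (article_id INTEGER, keyword_id INTEGER);
    -- the precomputed index: how many articles carry both keywords a and b
    CREATE TABLE cooccurrence (a INTEGER, b INTEGER, n INTEGER);
""")

def rebuild_cooccurrence(db):
    """The periodic (say, twice-daily, off-peak) rebuild: a self-join on
    the link table counts the articles sharing each pair of keywords."""
    db.execute("DELETE FROM cooccurrence")
    db.execute("""
        INSERT INTO cooccurrence (a, b, n)
        SELECT x.keyword_id, y.keyword_id, COUNT(*)
        FROM article_keywords x
        JOIN article_keywords y
          ON x.article_id = y.article_id AND x.keyword_id < y.keyword_id
        GROUP BY x.keyword_id, y.keyword_id
    """)
    db.commit()

# Toy data: article 1 tagged networking+tcp, article 2 tagged networking+linux.
for word in ("networking", "tcp", "linux"):
    db.execute("INSERT INTO keywords (word) VALUES (?)", (word,))
db.executemany("INSERT INTO article_keywords VALUES (?, ?)",
               [(1, 1), (1, 2), (2, 1), (2, 3)])
rebuild_cooccurrence(db)
```

With something like this, suggesting related keywords at posting time becomes a cheap lookup in `cooccurrence` rather than a scan over every article - the expensive work happens only in the off-peak rebuild.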
'Course, for all I know the keywords might just be a variable-sized text field within the article record, which I presume would make all this more difficult.