None of this is in any way related to what people actually do or how people actually talk about things when they aren't interacting with a very particular class of bureaucrats and consultants!
When a colleague from the private sector is looking for new graduates to hire, they don't ask me whether some particular skill or topic is on our list of Program Objectives or whatever. They just ask me how much experience my students have with that particular skill or topic. Ditto when a colleague in academia (a world in which we are all under administrative orders to embrace assessment) asks me if I have any good students to recommend for graduate study. They don't ask about our program objectives; they just ask whether I have smart, reliable students with extensive experience in a particular topic or skill. When I answer, I don't consult a list of Student Learning Outcomes. I know what is covered in the courses, and what students do in various research labs, so I give an informed answer.
Likewise, when a colleague embarks on teaching a course that they haven't taught before, they usually go to colleagues and ask questions like "What do you cover?" or "What book do you use?" or "What level do you cover this at?" or "In your experience, what do the students have difficulty with?" Nobody uses the language of assessment bureaucrats in these conversations. My department actually has a few faculty with extensive training in educational research, and while some of that jargon might appear in their publications, they do not (in my experience) use that language when talking face-to-face about things that they will actually do.
A tempting rejoinder is that this is all well and good for higher education insiders interacting with each other, but how could an outsider know what is going on without documents describing it? That's a fair point, except that in reality outsiders rely on contacts and networks and experience, not documents. Part of the reason is that reliance on networks and informal contacts is the way that humans have done business for a very, very long time. Everyone knows that the next village over has some excellent coppersmiths, and their apprentices are well-trained. If you need somebody to work copper, you hire one of them. Silk traders know which of the villages along the Silk Road have particularly reliable guides and bodyguards, so they know where to go to hire help.
Ah, but we aren't in the Bronze Age anymore! We're in the 21st century! Well, yes, but even in the 21st century the various tech industry sectors in Silicon Valley make extensive use of headhunters who know individuals, and they also rely on the reputations of schools. You'd think that if there were one place that would embrace transparent documentation for identifying human talent, it would be Silicon Valley. I can assure you that when Silicon Valley firms recruit from Stanford and Carnegie Mellon, it isn't because somebody published an impressive list of Program Outcomes. I can assure you that even my own university, not in the league of Stanford and Carnegie Mellon but nonetheless a widely respected school for engineering, architecture, and agriculture, acquired its reputation via something other than impressive lists of Student Learning Outcomes.
OK, so we're up against a trifling force like human nature, but when has that ever stopped a technocrat?
Well, we're also up against the need for the very sort of innovation and "disruption" that the technocrats claim to love so much. In the past few years several of us in my department have done a pretty wide range of innovative things in project-based classes: adopting new simulation software, trying novel formats for final projects, incorporating peer review into laboratory courses, and introducing "learning assistants"*. I sincerely think that some of our advanced courses are much better structured than the analogous courses that I took. However, none of this would have happened if each innovation had required submitting paperwork outlining revised Course Objectives, Student Learning Outcomes, etc. Can you imagine filing new paperwork with the massive university bureaucracy every time somebody decides to take their lab class to the next level? No innovation would ever happen if we stopped to produce extensive paperwork and revise the Strategic Plan or whatever!
I have no easy answer to the question "How should academic programs be evaluated?" What I can say is that the managerial mindset that they've been trying to shove down our throats for more than a decade is completely disconnected from what actually happens when educators interact with each other, with their students, and with those who would like to employ their students.
*Think TAs, except undergrads, and with much more supervision and far narrower scope. I'm not always sure that the LAs are doing as much for the students as we would like, but at the very least we know that when you teach a subject you learn it far better than you would have otherwise, so we can be certain that the Learning Assistant is learning.