Colleges start new academic programs

  • East Central University, in Oklahoma, is starting a master’s program on water policy.
  • New York University is adding a social impact, innovation and investment specialization in its master of public administration degree in public and nonprofit…

Conference spotlights effort to send more students from minority-serving institutions abroad

WASHINGTON, D.C. -- The Council on International Educational Exchange has pledged to fund 10,000 passports for students who apply at “Passport Caravan” events it’s holding at campuses across the country. To date, CIEE’s president and CEO, James P. Pellow, said the Passport Caravan project has issued 2,800 passports, 367 of which were funded by colleges that matched CIEE’s commitment, while the remainder have been funded by CIEE, a nonprofit exchange organization based in Portland, Me., that partners with universities to provide study abroad programs.

“What’s really interesting about the students that show up for these Passport Caravan drives is that they’re highly diverse and highly challenged economically,” Pellow said Monday at the opening plenary for the second annual Generation Study Abroad Summit organized by the Institute of International Education. “About 55 percent of students are students of color and about 48 percent are Pell eligible. It’s a very rich, very diverse group of students that typically do not participate in our semester programs who now have a passport in their sock drawer.”

This week’s summit in Washington is part of the Generation Study Abroad initiative, launched by IIE in 2014, with the goal of doubling the number of Americans studying abroad by 2020 and diversifying the pool of students participating.

More than 700 educational institutions or organizations, including 408 U.S.-based colleges or universities and 189 international universities or organizations, as well as education associations, study abroad providers and government entities, have signed on to the Generation Study Abroad initiative by making pledges aimed at increasing study abroad participation.

IIE on Monday announced that 12 partners had met the commitments they’d made to increase study abroad participation using a variety of strategies that include developing new short-term study abroad programs for freshmen, expanding the size of the study abroad office, creating a database of study abroad programs by field of study, designating faculty members as study abroad advisers and providing scholarship funding. Generation Study Abroad partners have collectively raised more than $55 million for study abroad scholarships, according to IIE.

“Just two years after joining Generation Study Abroad, colleges and universities across the country are seeing measurable results in their study abroad participation rates,” Allan Goodman, IIE’s president and CEO, said in a written statement. “Studying abroad is one of the best ways to prepare to enter and succeed in the interconnected, globalized workforce, yet 90 percent of American college students do not study or intern outside of the United States. We owe it to the next generation of Americans to explain why study abroad is more crucial than ever and to find ways to make it more accessible to a wider range of students.”

One of the sessions on Monday focused specifically on increasing study abroad participation at minority-serving institutions. Pellow of CIEE -- one of the lead organizational sponsors of the conference -- spoke on that panel as well. CIEE has joined with the University of Pennsylvania’s Center for Minority Serving Institutions in a three-year partnership intended to increase study abroad participation at MSIs through presidential and faculty training workshops and student scholarships. In the spring CIEE pledged to donate all exhibitor fees for its annual conferences from 2016 through 2018 for study abroad scholarships for students from minority-serving institutions.

Minority-serving institutions educate about 20 percent of all U.S. undergraduates, according to Marybeth Gasman, a professor of higher education at Penn and the director of the center. But, as Gasman noted in her presentation on Monday, students from MSIs made up about 3.6 percent of all American college students going abroad in 2012-13 -- 10,573 out of 289,408 students.

“One of the reasons why we all came together is that we want to change that,” Gasman said. “The majority of students who study abroad are white” -- 74.3 percent, according to the latest national data collected by IIE -- “and we also know that we don’t have enough students from minority-serving institutions that are studying abroad. We are trying to fundamentally change that in our partnership.”

David Wilson, the president of Morgan State University, a historically black university in Baltimore, spoke of increasing international student enrollment -- now around 11 percent -- and growing study abroad participation at his campus.

Wilson leads an alliance of more than 40 HBCUs that formed in response to a Chinese government commitment to provide 1,000 scholarships for students from historically black institutions. Morgan State’s first group of 10 students went to China in 2015 and a second group of 14 went last summer. In his PowerPoint presentation, Wilson highlighted several other outbound initiatives in Africa and Latin America including a health-oriented program in Cameroon, a program focused on teaching English as a foreign language in Colombia and a trip by the university’s choir to Cuba this past summer. (Statistics presented by Gasman earlier in the presentation suggest that students at HBCUs differ somewhat from the overall student profile in terms of choice of destination. While the top destinations for all U.S. students studying abroad are the United Kingdom, Italy, Spain, France and China, for students at HBCUs China is the top destination, followed by Ghana, Spain, France and Brazil.)

Wilson said that Morgan State has established partnership agreements with more than 30 international universities in Africa, Asia, Europe, the Middle East and Latin America and the Caribbean. “The major barrier for us at Morgan is the ability to obtain the financial resources that are necessary to both support the student at Morgan who wants to go abroad and then to really operate these agreements in the way in which they were designed to operate -- that is, to provide opportunities for other students in other countries to come and then to send our students to these campuses for a semester or two,” Wilson said.

The other main barrier Wilson identified for MSIs was also resource related -- “the ability to provide support to enhance foreign language programs on HBCU and MSI campuses, to stimulate student interest in the language, culture, art and society.”

Working paper explores effect of online education growth on higher ed market

A federal rule change that opened the door to more fully online degree programs has not made college tuition more affordable, according to a working paper from the National Bureau of Economic Research, but at some place-based institutions, enrollment has declined and instructional spending has increased as a result.

The paper, written by David J. Deming, Michael Lovenheim and Richard W. Patterson, explores what has happened in the postsecondary education sector in the years following the removal of the 50 percent rule. The rule change, which occurred in February 2006, meant colleges that enrolled more than half of their students in fully online programs could participate in federal financial aid programs.

At the time, much of the debate centered around what the rule change would mean to the for-profit sector. The paper expands beyond that debate, exploring how competition from online education providers affected the existing market for higher education. More specifically, it looks at the impact of online education in areas traditionally served by one or a handful of colleges that sustained themselves by enrolling local students.

“In a well-functioning marketplace, the new availability of a cost-saving technology should increase efficiency, because colleges compete with each other to provide the highest-quality education at the lowest price,” the paper, which has not yet been peer reviewed, reads. “Nonselective public institutions in less dense areas either are local monopoly providers of education or have considerable market power. Online education has the potential to disrupt these local monopolies by introducing competition from alternative providers that do not require students to leave home to attend.”

The researchers, who are based at Cornell University, the Harvard Graduate School of Education and the United States Military Academy, used data collected by the federal government’s Integrated Postsecondary Education Data System to track enrollment, revenue, expenditure and tuition trends from 2000 to 2013 — before and after the rule change. They used the data to test three predictions: that competition from more fully online programs would lower tuition rates for face-to-face programs, lead to increased spending on instruction and student support services, and drive down enrollment in areas with low competition between colleges.
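
To make that design concrete, here is a minimal sketch, in Python, of the kind of before-and-after comparison the paper describes. The file name, column names and specification below are invented for illustration and are not the authors’ actual model:

```python
# Hypothetical sketch of a pre/post-2006 comparison on an IPEDS-style
# institution-by-year panel. All file and column names are invented.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("ipeds_panel_2000_2013.csv")  # hypothetical input file
panel["post_rule_change"] = (panel["year"] >= 2006).astype(int)

# The interaction term asks: after the 50 percent rule was removed, did
# enrollment move differently at colleges in low-competition markets?
model = smf.ols(
    "log_enrollment ~ post_rule_change * low_competition",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["unitid"]})

print(model.params["post_rule_change:low_competition"])
```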

The data provided support for only two of those predictions. For one, colleges located in areas with low competition were more likely than others to experience a decline in enrollment. The finding was statistically significant only for less selective private institutions, which saw enrollment declines following the 2006 rule change. Instructional spending also increased at public institutions, though the trend began before the rule change — perhaps because the colleges were anticipating increased competition from online programs, the paper suggests.

The researchers did not find any evidence that the growth of the online education market lowered the price of tuition for face-to-face programs, however. In fact, at some private four-year institutions in markets with little competition, tuition rates actually increased.

The paper presents two possible explanations why. It could be that private colleges were forced to increase tuition as a result of the enrollment declines that the researchers discovered. Alternatively, the availability of federal grants and loans could be making price less of an issue to students, which could explain why tuition rates didn’t decline.

“This evidence suggests schools likely are not competing over posted prices, which is sensible given the sizable subsidies offered to students through the financial aid system,” the paper reads. “Indeed, if prices are difficult for students to observe, higher competition could cause an increase in posted prices that are driven by university expansions in educational resources.”

Russell Poulin, director of policy and analysis for the WICHE Cooperative for Educational Technologies, said in an email that he agreed with the paper’s findings on the impact of online education in less competitive markets. However, he criticized the paper for assuming colleges are drawn to online education mainly for cost-cutting reasons, as opposed to expanding their market share and serving adult learners.

“Rather than cannibalizing existing student enrollments, online education often serves adults who are time and place bound,” Poulin wrote. “They were not in the market and now are.”

In related news, a separate paper released by the NBER explored enrollment trends in the Georgia Institute of Technology’s low-cost online master’s degree program in computer science. The study found (as Inside Higher Ed previously reported) little overlap between applicants to the program and the residential program it was based on, suggesting Georgia Tech had tapped into a population of potential students not served by existing master’s degree programs.

Book explores uses and abuses of data to measure science at the institutional and faculty levels

“Since the first decade of the new millennium, the words ranking, evaluation, metrics, h-index and impact factors have wreaked havoc in the world of higher education and research.”

So begins a new English edition of Bibliometrics and Research Evaluation: Uses and Abuses from Yves Gingras, professor and Canada Research Chair in history and sociology of science at the University of Quebec at Montreal. The book was originally published in French in 2014, and while its newest iteration, published by the MIT Press, includes some new content, it’s no friendlier to its subject. Ultimately, Bibliometrics concludes that the trend toward measuring anything and everything is a modern, academic version of “The Emperor’s New Clothes,” in which — quoting Hans Christian Andersen, via Gingras — “the lords of the bedchamber took greater pains than ever to appear holding up a train, although, in reality there was no train to hold.”

Gingras says, “The question is whether university leaders will behave like the emperor and continue to wear each year the ‘new clothes’ provided for them by sellers of university rankings (the scientific value of which most of them admit to be nonexistent), or if they will listen to the voice of reason and have the courage to explain to the few who still think they mean something that they are wrong, reminding them in passing that the first value in a university is truth and rigor, not cynicism and marketing.”

Although some bibliometric methods “are essential to go beyond local and anecdotal perceptions and to map comprehensively the state of research and identify trends at different levels (regional, national and global),” Gingras adds, “the proliferation of invalid indicators can only harm serious evaluations by peers, which are essential to the smooth running of any organization.”

And here is the heart of Gingras’s argument: that colleges and universities are often so eager to proclaim themselves “best in the world” — or region, state, province, etc. — that they don’t take care to identify “precisely what ‘the best’ means, by whom it is defined and on what basis the measurement is made.” Put another way, he says, paraphrasing another researcher, if the metric is the answer, what is the question?

Without such information, Gingras warns, “the university captains who steer their vessels using bad compasses and ill-calibrated barometers risk sinking first into the storm.” The book doesn’t rule out the use of indicators to “measure” science output or quality, but Gingras says they must first be validated and then interpreted in context.

‘Evaluating Is Not Ranking’

Those looking for a highly technical overview of — or even technical screed against — bibliometrics might best look elsewhere; Bibliometrics is admittedly heavier on opinion than on detailed information science. But Gingras is an expert on bibliometrics, as scientific director of his campus’s Observatory of Science and Technology, which measures science, technology and innovation — and he draws on that expertise throughout the book.

Bibliometrics begins, for example, with a basic history of its subject, from library management in the 1950s and 1960s to science policy in the 1970s to research evaluation in the 1980s. Although evaluation of research is of course at least as old as the emergence of scientific papers hundreds of years ago, Gingras says that new layers — such as the evaluation of grants, graduate programs and research laboratories — were added in the 20th century. And beginning several decades ago, he says, qualitative, peer-led means of evaluations began to be seen as “too subjective to be taken seriously according to positivist conceptions of ‘decision making.’”

While study of publication and citation patterns, “on the proper scale, provides a unique tool for analyzing global dynamics of science over time,” the book says, the “entrenchment” of increasingly quantitative (and often ill-defined) indicators in the formal evaluation of institutions and researchers opens the door to their abuse. Negative consequences of a bibliometrics-centered approach to science include the suppression of risk-taking research that might not necessarily get published, and even deliberate gaming of the system — such as paying authors to add new addresses to their papers in order to boost the position of certain institutions on various rankings.

Among other rankings systems, Gingras criticizes Nature Publishing Group’s Nature Index, which ranks countries and institutions on the basis of the papers they publish in what the index defines as high-quality science journals. Because 17 of the 68 journals included in the index are published by the Nature group, the ranking presents some conflict of interest concerns, he says — especially as institutions and researchers may begin to feel pressured into publishing in those journals. More fundamentally, measuring sheer numbers of papers published in a given group of journals — as opposed to, say, citations — “is not proof that the scientific community will find it useful or interesting.”

In a similar point that Gingras makes throughout the book, “evaluating is not ranking.”

A ‘Booming’ Evaluation Market

Faculty members at Rutgers University, for example, have cited the intended and unintended consequences of administrative overreliance on indicators from Academic Analytics, a productivity index and benchmarking firm that aggregates publicly available data from the web. The faculty union there has asked the university not to use information from the database in personnel and various other kinds of decisions, and to make faculty members’ profiles available to them.

David Hughes, professor of sociology and president of the American Association of University Professors- and American Federation of Teachers-affiliated faculty union at Rutgers, wrote in an essay for AAUP’s “Academe” blog last year, “What consequences might flow from such a warped set of metrics? I can easily imagine department chairs and their faculties attempting to ‘game the system,’ that is to publish in the journals, obtain the grants and collaborate in the ways that Academic Analytics counts. Indeed, a chair might feel obligated, for the good of her department, to push colleagues to compete in the race. If so, then we all lose.”

Hughes continued, “Faculty would put less energy into teaching, service and civic engagement — all activities ignored by the database. Scholarship would narrow to fit the seven quantifiable grooves. We would lose something of the diversity, heterodoxy and innovation that is, again, so characteristic of academia thus far. This firm creates incentives to encourage exactly that kind of decline.”

Faculty members at Rutgers also have cited concerns about errors in their profiles, either overestimating or underestimating their scholarship records. Similar concerns about accuracy, raised after a study comparing faculty members’ curriculum vitae with their system profiles, led Georgetown University to drop its subscription to Academic Analytics. In an announcement earlier this month, Robert Groves, the provost, said the quality of coverage of the “scholarly products of those faculty studied are far from perfect.”

Even with perfect coverage, Groves said, “the data have differential value across fields that vary in book versus article production and in their cultural supports for citations of others’ work.” Without adequate coverage, “it seems best for us to seek other ways of comparing Georgetown to other universities.”

In response to such criticisms, Academic Analytics has said that it opposes using its data in faculty personnel decisions, and that it’s helpful to administrators as one tool among many in making decisions.

Other products have emerged as possible alternatives to Academic Analytics, including Lyterati. The latter doesn’t do institutional benchmarking — at least not yet, in part because it’s still building up its client base. But among other capabilities, it includes searchable faculty web profiles with embedded analytics that automatically update when faculty members enter data. Its creators say that it helps researchers and administrators identify connections and experts that might otherwise be hard to find. Dossiers for faculty annual reports, promotion or tenure can be generated from a central database, instead of from a variety of different sources, in different styles.

The program is transparent to faculty members because it relies on their participation to build web profiles. “We are really focused on telling universities, ‘Don’t let external people come in and let people assess how good or productive or respected your faculty is — gather the data yourself and assess it yourself,’” said Rumy Sen, CEO of Entigence Corporation, Lyterati’s parent company.

Dianne Martin, former vice provost for faculty affairs at George Washington University, said she used it as a vehicle for professors to file required annual reports and to populate a faculty expert finder web portal. With such data, “we can easily run reports about various aspects of faculty engagement,” such as which professors are engaged in activities related to the university’s strategic plan or international or local community activities, she said. Reports on teaching and research also can be generated for accreditation purposes.

But Lyterati, too, has been controversial on some campuses; Jonathan Rosenberg, Ruth M. Davis Professor of Mathematics at the University of Maryland at College Park, said that his institution dropped it last year after “tremendous faculty uproar.”

While making uniform faculty CVs had “reasonable objectives,” he said, “implementation was a major headache.” Particularly problematic was the electronic conversion process, he said, which ended up behind schedule and became frustrating when data — such as an invited talk for which one couldn’t remember an exact date — didn’t meet program input guidelines.

Gingras said in an interview that the recent debates at Rutgers and Georgetown show the dangers of using a centralized and especially private system “that is a kind of black box that cannot be analyzed to look at the quality of the content — ‘garbage in, garbage out.’”

Companies have identified a moneymaking niche, and some administrators think they can save money using such external systems, he said. But their use poses “grave ethical problems, for one cannot evaluate people on the basis of a proprietary system that cannot be checked for accuracy.”

The reason managers want centralization of faculty data is to “control scientists, who used to be the only ones to evaluate their peers,” Gingras added. “It is a kind of de-skilling of research evaluation. … In this new system, the paper is no [longer] a unit of knowledge and has become an accounting unit.”

Gingras said the push toward using “simplistic” indicators is probably worst in economics and biomedical sciences; history and the other social sciences are somewhat better in that they still have a handle on qualitative, peer-based evaluation. And contrary to beliefs held in some circles, he said, this process has always had some quantitative aspects.

The book criticizes the “booming” evaluation market, describing it as something of a Wild West in which invalid indicators are peddled alongside those with potential value. Gingras says that most indicators, or the variables that make up many rankings, are never explicitly tested for their validity before they are used to evaluate institutions and researchers.

Testing Indicators for Validity

To that point, Gingras proposes a set of criteria for testing indicators. The first criterion is adequacy, meaning that it corresponds to the object or concept being evaluated. Sometimes this is relatively simple, he says — for instance, a country’s level of investment in scientific research and development is a pretty good indicator of its research activity intensity.

But things become more complicated when trying to measure “quality” or “impact” of research, as opposed to sheer quantity, he says. For example, the number of citations a given paper receives might best be a measure of “visibility” instead of “quality.” And a bad indicator is one based on the number of Nobel Prize winners associated with a given university, since it measures the quality and quantity of work of an individual researcher, typically over decades — not the current quality of the institution as a whole. Gingras criticizes the international Shanghai Index for making Nobels part of its “pretend” institutional ranking formula.

Gingras’s second indicator criterion is sensitivity to the “inertia” of the object being measured, “since different objects change with more or less difficulty (and rapidity) over time.” Just as a thermometer that gave wildly different readings for the temperature of a room over a short period would be deemed faulty, he says, so should ranking systems that allow institutions to move significantly up or down within a single year.

Universities are like supertankers, he says, and simply can’t change course so quickly. So ranking institutions every year or even every couple of years is folly — bad science — and largely a marketing strategy from producers of such rankings. Gingras applauds the National Research Council, for example, for ranking doctoral departments in each discipline every 10 years, a much more valid interval that might just demonstrate actual change. (The research council ranking gets a lot of additional criticism for its system, however.)

A third, crucial property of indicators is their homogeneity. The number of articles published in leading scientific journals, for example, can measure research output at the national level. But if one were to combine the number of papers with a citation measure — as does the widely known h-index, which measures individual scholars’ productivity and citation impact — indicators become muddled, Gingras says.
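
The h-index itself is easy to state: a scholar has an h-index of h when h of his or her papers have each been cited at least h times. A minimal sketch in Python, using invented citation counts, shows the computation:

```python
# Minimal sketch of the h-index computation (citation counts are invented).
# A scholar has h-index h if h of their papers have at least h citations each.
def h_index(citations: list[int]) -> int:
    """Return the h-index for a list of per-paper citation counts."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

# Five papers cited 10, 8, 3, 1 and 0 times give an h-index of 3:
# three papers have at least 3 citations each.
print(h_index([10, 8, 3, 1, 0]))  # prints 3
```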

“The fundamental problem with such heterogeneous composite indicators is that when they vary, it is impossible to have a clear idea of what the change really means, since it could be due to different factors related to each of its heterogeneous parts,” the book reads. “One should always keep each indicator separate and represent it on a spiderweb diagram, for example, in order to make [visible] the various components of the concept being measured.”

A fourth criterion that Gingras says should be implicit but isn’t is that if the value of the concept is higher, the value measured by the indicator should be higher. Some rankings consider the proportion of foreign faculty members as an indicator of success on the world stage, for example, but a 100 percent foreign faculty isn’t necessarily better than 20 percent foreign.

Academic ‘Moneyball’?

The AAUP earlier this year released a statement urging caution against the use of private metrics providers to gather data about faculty research.

Henry Reichman, a professor emeritus of history at California State University at East Bay who helped draft the document as chair of the association’s Committee A on Academic Freedom and Tenure, said faculty bibliometrics were a corollary to the interest in outcomes assessment, in that the goals of each are understandable but the means of measurement are often flawed. Faculty members aren’t necessarily opposed to the use of all bibliometrics, he added, but they should never replace nuanced processes of peer review by subject matter experts.

Bibliometrics in many ways represent a growing gap, or “gulf,” between administrators and faculty members, Reichman added; previously, many administrators were faculty members who eventually would return to the faculty. While that’s still true in many places, he said, university leaders increasingly have been administrators for many years or are drawn from other sectors, where “bottom lines” are much clearer than they are in higher education.

Brad Fenwick, vice president of global and academic research relations for Elsevier — which runs Scopus, a major competitor of Web of Science and the world’s largest citation database for peer-reviewed literature — said that as a former faculty member he understood some of the criticisms of bibliometrics.

No one metric is a sufficient measure of who’s best, he said, and any database must be “sensitive to the uniqueness of disciplines, which have different ways of communicating and sharing their scholarly work.” Elsevier’s answer was to come up with “lots of different ways of doing it, and being very, very transparent” about their processes, he said. That means faculty members have access to their profiles, for example.

Like Reichman, Fenwick said bibliometrics is not an alternative to peer review, but a complement. He compared administrators’ use of bibliometrics to baseball’s increasingly analytic approach made famous in Michael Lewis’s Moneyball: The Art of Winning an Unfair Game, in which human beings use a mix of their expertise, intuition and data to make “marginally better decisions.” And those decisions aren’t always or usually negative, he said; they might mean an administrator is able to funnel additional resources toward an emerging research focus he or she wouldn’t have otherwise noticed.

Cassidy Sugimoto, associate professor of informatics and computing at Indiana University at Bloomington and co-editor of Scholarly Metrics Under the Microscope: From Citation Analysis to Academic Auditing, said criticism of bibliometrics for evaluating scholars and scholarship is nothing new, and the field has adjusted to it over time. Issues of explicit malpractice — such as citation stacking and citation “cartels” — are addressed by suppressing data for such individuals and journals in citation indicators, for example, she said. And various distortions in interpretation, such as those caused by the “skewness” of citation indicators and by wide variation across disciplines and scholars’ ages, have been mitigated by the adoption of more sophisticated normalizations.
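
As one illustration of what such normalization can look like in practice, here is a minimal sketch that rescales raw citation counts by the average for papers in the same field and year, so that counts from low-citation and high-citation disciplines become comparable. The data are invented, and real indicators involve more careful matching of document types and citation windows:

```python
# Hypothetical sketch of field normalization: each paper's citation count is
# divided by the mean count for papers in the same field and year.
import pandas as pd

papers = pd.DataFrame({
    "field": ["history", "history", "biomed", "biomed"],
    "year":  [2012, 2012, 2012, 2012],
    "cites": [4, 2, 80, 40],
})

field_year_mean = papers.groupby(["field", "year"])["cites"].transform("mean")
papers["normalized_cites"] = papers["cites"] / field_year_mean

# The top history paper (4 cites) and the top biomedical paper (80 cites)
# both score about 1.33 against their own field's baseline.
print(papers)
```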

“Just as with any other field of practice, scientometrics has self-corrected via organized skepticism, and bibliometrics continues to offer a productive lens to answer many questions about the structure, performance and trajectory of science,” she said.

Residual issues arising from the use of indicators for evaluation purposes are “as much sociological as scientometric problems, and are often caused not by the metric itself but in the application and interpretation of the metric,” Sugimoto added via email. “I would not call, therefore, for the cessation of the use of metrics, but rather to apply and interpret them more appropriately — at the right levels of analysis — to provide rigorous analyses of science.”

Summing up his own work, Gingras said that bibliometrics “work best at aggregate levels to indicate trends when indicators are well defined. At the level of individual evaluation they fluctuate too much to replace the usual peer review by experts who know the field and thus know what a productive scientist is.”

[Image via Wikipedia: Emperor’s New Clothes monument in Odense, Denmark]

At Knight Commission meeting, worries over spending and stability of Football Bowl Subdivision

WASHINGTON — In December, William E. Kirwan will retire from the Knight Commission on Intercollegiate Athletics after serving a decade as the group’s chair. Following his final meeting on Monday, Kirwan shared a grim prediction for the future of the Football Bowl Subdivision, the most competitive level of the National Collegiate Athletic Association.

The “out of control expenditures” seen among the division’s top programs, Kirwan said, are creating a financial divide that will eventually force out all but the wealthiest institutions. If the prediction holds, the University of Idaho’s decision to leave the FBS in April would be only the beginning of an eventual exodus.

“The discrepancy between the haves and have-nots is just growing so significantly, and I think we will definitely see more University of Idahos in the future,” Kirwan said. “I think as this have and have-not gap gets wider and wider, more and more schools are going to drop out of the FBS and that’s when a crisis is going to start to emerge, because the model of FBS is dependent on 120 schools playing and we’re reaching a point where that’s just not going to be possible.”

The instability and heavy spending of the Football Bowl Subdivision were a common theme during the commission’s meeting here Monday.

Some at the meeting, including Kirwan, said they would support an antitrust exemption for the NCAA that would allow the association to cap coaching salaries and other athletic spending. Sandy Barbour, athletics director at Pennsylvania State University, said college sports’ current level of spending was “unsustainable.” Scott Cowen, the former president of Tulane University, also called the current model “not sustainable” and blamed the Power Five leagues — the 65 wealthiest programs, which gained the ability to create their own rules in 2014 — for driving the increase in spending.

In 2005, revenue generated by the Power Five leagues was $570 million. According to the Knight Commission, that revenue will be $2.8 billion by 2020.

During the meeting, the commission released a report that examined the various sources from which Division I institutions receive their revenue and how the money is spent. For the FBS, about 8 percent of athletic budget revenue comes from student fees, with another 11 percent coming from institutional and government support. At the Football Championship Subdivision level, student fees account for 26 percent of revenues and 43 percent of the budget is provided through institutional and government support.

For colleges that have no football program, the athletic department is mostly subsidized, with 41 percent of the funds coming from student fees and 35 percent from institutional support.

During Monday’s meeting, Jeff Bourne, athletics director at James Madison University, said $34 million of the university’s $47 million sports budget is funded by student fees. Bourne said the university is currently examining whether it should join the FBS. In recent years, other institutions that heavily subsidize their athletic budgets with student fees and institutional support have discussed making the opposite move: from the FBS to the FCS, or eliminating football completely.

  • In 2014, the University of Alabama at Birmingham became the first institution in nearly 20 years to shut down its big-time Division I football program. Citing rising costs and the growing stratification of college sports, Ray Watts, the university’s president, said killing the program would help save Birmingham $50 million over the next decade.
  • Last year, some faculty and students at the University of Akron urged the university to decrease football spending by dropping to the FCS. A $60 million deficit had prompted the university to announce staff layoffs and the elimination of its baseball team. Meanwhile, the university was spending $8 million per year to subsidize a football team with the lowest attendance in the Football Bowl Subdivision.
  • Earlier this year, the Faculty Senate and student government at Eastern Michigan University told university administrators that its heavily subsidized FBS football team should leave for the FCS.

In the end, however, none of these programs dropped football or moved down to the FCS. Alabama Birmingham’s decision created an intense backlash on campus, setting off a tumultuous six months that eventually resulted in its complete reversal. Akron’s president said that moving to the FCS was “not on the table.” When faculty members and students made their suggestion at Eastern Michigan, the university stated that it had “absolutely no plans to eliminate football or move into any other division or conference.”

Chuck Staben, president of the University of Idaho, said in an interview Monday that if the sort of change Kirwan predicts is coming, he hasn’t seen many signs that it will be soon. Since announcing Idaho’s move to the FCS in April, other institutions “have not been knocking” on the president’s door, he said.

Idaho’s decision was made easier by the Sun Belt Conference choosing not to renew the university’s membership earlier this year. For other institutions with struggling football programs, Staben said, having a place in a conference may mean it’s not worth making the transition. Even for Idaho, the move hasn’t been without complications. When the decision was announced in April, several donors expressed anger and said they would no longer donate to the university.

Interest in “Idaho athletics would virtually disappear with the move, along with any potential for financial support,” one booster warned, adding that contributions to the university would decline or “cease entirely.” As it turns out, some boosters were true to their word.

“Some of our donors got ticked off, and they pulled about a half a million dollars in fund-raising,” Staben said, adding that the university was already facing a $500,000 deficit before the move to the FCS.

Staben said he still believes the university made the right decision, and that joining the FCS will help alleviate the funding shortfall, since the football program will be able to spend far less on travel and on coaching salaries. Whether other programs will determine that such a move makes sense remains to be seen.

“So far, I don’t see anybody lining up to do it,” Staben said.

Editorial Tags: 
Image Source: 
Knight Commission
Is this breaking news?: 

AAUP to Investigate Community College of Aurora

The AAUP will formally investigate the dismissal of a part-time (adjunct) professor at the Community College of Aurora in Colorado. The investigation will examine the facts of the case to determine whether widely accepted principles of academic freedom, necessary for educational quality, have been violated.

Doubts About Data: 2016 Survey of Faculty Attitudes on Technology

Highlights: Questions about who stands to benefit from data-driven assessments, lack of concern about cyberattacks, dissatisfaction with the scholarly publishing landscape, continued skepticism about online education quality.

Ad Keyword: 
facultytech2016
Is this breaking news?: 

At Alabama and Greenville, a backlash to anthem protests by black students

Black students nationwide have been inspired by and joined the protest started by Colin Kaepernick of the National Football League in which people decline to stand during the national anthem before athletic contests.

The protests continue to spread, and they are facing a backlash at some campuses. Some of the backlash consists of people making a case for standing for the anthem. But some of the backlash — especially on social media — is coming in the form of ugly comments directed against black students.

At a football game at the University of Alabama at Tuscaloosa on Saturday, a few dozen black students sat together and remained seated for the national anthem. (The football team at Alabama doesn’t appear on the field until after pregame activities such as the anthem, and so has not been involved in the protest.)

Saturday’s protest was the second at an Alabama football game, and students announced their plans in advance, using the hashtag #bamasits to organize and explain their views.

#bamasits pic.twitter.com/qQzq5FvLYg

— cleopatra ☥ (@aminafromthesix) October 22, 2016

As at other colleges and universities where students have protested, students cited police violence against young black people and the racism that remains present in American society.

At the game on Saturday, many white fans made a point of bringing and displaying U.S. flags, placing their hands over their hearts during the anthem, and showing great enthusiasm for it. (Some people do this during any game, but the patriotic activity was much more pronounced than during typical games.)

Many of those opposed to the protest organized around a hashtag of #bamastands.

On social media, many comments directed at the black students went beyond simply disagreeing with them.

Fuck these college assholes. They don’t have a fucking clue how the world operates. “Mistreatment of blacks.” Kiss my ass! #bamasits

— Bud Davis (@slumpbuster2001) October 22, 2016

#bamasits is a bunch of whiney uneducated children who are puppets being used by the media and liberals to push a dividing agenda.

— Aaron Williams (@Worship1AMW) October 22, 2016

Other comments criticized the Black Lives Matter movement that many of the protesting students support. And others wrote that part of being at the University of Alabama is supporting the flag and not protesting in this way.

Others expressed support for the protest, and some said that they felt new pride in Alabama after seeing these students protest. Many defending them noted that the protest was entirely peaceful and did not interfere with the ability of other fans to stand during the anthem. Many wrote that the responses they saw online and in person pointed to the reasons they supported the protest. “Today I was brought to tears as my peers bombarded our peaceful protest …. [I don’t know] why the hate is so deep,” one student wrote on Twitter.

The Facebook page on which Alabama students trade football tickets featured many divisive remarks about the protest — and one person who posted what was deemed a criminal threat against black students has since been arrested.

The Crimson White, the student newspaper at Alabama, published an editorial (after the first protest but before Saturday’s) that defended the right of students to sit during the anthem, and said that all should be concerned about a lack of respect shown to those who protest.

“It can be difficult to adjust to new perspectives; when faced with new points of view, some react explosively and insensitively because they don’t know how to accommodate a set of ideas that differs so heavily from their own,” the editorial said. “But this extends beyond gathering new perspectives. This is about deeply ingrained, institutionalized racism that has now pervaded even the simplest of American exercises — peaceful protest. We’re never going to agree on everything. But the least we can do is understand and respect one another.”

A statement from the university said that the protests have First Amendment protection. “The university supports the rights and ability of students to protest in a way that does not infringe on the freedoms of others,” the statement said. “The UA campus community should continue to be accepting, tolerant, open to differing ideas and opinions, and should treat one another with respect despite any differences. Not everyone will agree with opinions expressed by other individuals or groups, but these conflicting opinions and views are almost always protected by the First Amendment.”

Veterans on the Field at Greenville

At Greenville College, in Illinois, some football players in recent weeks have taken a knee during the national anthem, prompting discussion at the college and in the surrounding areas.

KSDK News reported that on Saturday, members of a local Veterans of Foreign Wars lodge — without permission from the college — marched to the center of the football field before the game. The football team was then told by college officials to go back to the locker room, although some students remained on the field.

Diversity
Editorial Tags: 
Image Source: 
Twitter
Image Caption: 
Protesting students and those protesting the protest at Alabama
Is this breaking news?: 

Federal regulators: university subsidies for grad student health insurance can remain

Federal regulators released guidance on graduate student health insurance subsidies Friday that should provide reassurance to universities considering whether they will still offer the subsidies. The guidance likely will be viewed as great news by many graduate students.

An Internal Revenue Service interpretation of the Affordable Care Act barred large employers from subsidizing employees’ purchase of health insurance on the individual market -- a view the agency applied even to student health insurance plans negotiated by a university with insurers. That interpretation had left many large public universities scrambling over the last year to identify alternative options to provide affordable insurance to graduate workers.

Many advocates for graduate students and leaders of universities said that the IRS interpretation ignored the many ways in which universities subsidizing graduate student health insurance are not typical of the kinds of employers that the health care law sought to regulate.

The government in February said agencies would wait until the 2017-18 academic year to enforce that interpretation. The new guidance released Friday by the Departments of Treasury, Labor, and Health and Human Services indicates they will extend that nonenforcement indefinitely.

“We’re very appreciative of what regulators and the administration did today,” said Steven Bloom, director of government relations at the American Council on Education, which advocates on behalf of public and private colleges and universities. “We think it solves an immediate problem that many schools were having a difficult time figuring out what to do for the upcoming year.”

Bloom said ACE doesn’t consider the issue entirely resolved, but that for now colleges and universities can continue what they have been doing to offer subsidized health insurance to graduate students.

The Kansas Board of Regents announced this month that public universities in the state would cease offering those subsidies next year in response to the IRS interpretation of the Affordable Care Act. Bloom said it’s possible some member universities that were considering the same decision as Kansas may reconsider in light of the additional guidance Friday.

Breeze Richardson, a spokeswoman for the Kansas Board of Regents, said there was now no reason for state institutions to stop offering subsidies currently in place.

“We are extremely pleased about this latest decision and hope that the federal agencies involved will make it a permanent one,” she said in an email.

In 2015, the University of Missouri announced shortly before the fall semester was to begin that it would stop offering subsidies for health insurance to its graduate students. In the face of protests and after the involvement of Senator Claire McCaskill, a Missouri Democrat, the university changed course and reinstated the subsidies indefinitely.

Kristofferson Culmer, the president and CEO of the National Association of Graduate-Professional Students, said the group was relieved to see the extension granted indefinitely for universities.

“A lot of the options they were looking into were ultimately going to raise costs for individual students as well as for the universities,” he said.

Editorial Tags: 
Is this breaking news?: