Author Archives: robinnoonan

“Still Alice” reminds us to remember the challenges facing the caregiver

Guest post by Laura Wayman,  The Dementia Whisperer

In the film Still Alice, Alice Howland is a linguistics professor who endures, at the unusually young age of 50, dementia symptoms caused by a form of young onset Alzheimer’s that runs in her family. Although this type of Alzheimer’s is rare, the dementia symptoms are the same as the more common form of the disease with which more than 5 million older Americans are living.

This movie poignantly portrays Alice as, with disturbing swiftness, she struggles with the agonizing loss of herself: her career, her individuality, her cognition, and her connection to the world around her.

Watching the movie, I was primarily transfixed by the impact Alzheimer’s had on those around her as Alice faded into the darkness of dementia, specifically the effect on her three grown-up children (also at risk, since this hereditary form of Alzheimer’s can be passed directly from parent to child) and the emotional devastation experienced by her grieving husband.

Of course, every family and situation is different. If you are a caregiver, you may have been thrust into this caregiving role unexpectedly—without any training or even any encouragement. Perhaps the care is being provided at home, with or without other family or professional in-home support. Or maybe the care is provided in a specialized memory care unit, an assisted living environment, or a skilled nursing facility. Although caregiving is often inspiring and rewarding, it can also be difficult and challenging. And caring for someone with cognitive impairment can be much more difficult than caring for someone with a physical impairment who is fully competent mentally and emotionally. The complications of confusion, forgetfulness, and memory loss, and the behaviors that go along with them, can be traumatic for the person with the disease and for the person providing care. Because of the dementia, neither the person involved nor the relationship will ever be the same.

This disease is not just destructive to the person diagnosed with Alzheimer’s but also forever alters what family members have come to know, expect, and adore about their loved one over the years: those individual expressions and ways of interacting with which we have become lovingly familiar. The disease takes away pieces of our loved one, sneaking up little by little until family members can no longer recognize the person or the cherished relationship. And the toll on these family members is shattering, yet there is no end in sight, no cure, no prevention, and no way to effectively slow it down.

As The Dementia Whisperer, my mission is to provide you, and all those caring for a loved one with any form of dementia, with support in the form of education, inspiration, and encouragement along this challenging journey of dementia care. We are all so focused on the most horrific illness of our time (and well we should be) and on its ruinous effect on those diagnosed with one of the more than seventy estimated causes of dementia that we often overlook the far-reaching damage inflicted on the family caregiver: the real hero of the “Alzheimer’s Generation.”

Caring for a person with dementia brings with it much more work (and stress) than caring for someone with other types of illnesses. It can be a long journey, and if caregivers do not take time for themselves, they will not be around to take care of the person with dementia. The following story, which I often share, is about my mother, Peggy, and is a classic example of the devastating effects of caregiver stress. She was thrust into the role of caring for my father, who was diagnosed with Alzheimer’s. When my father’s health began to fail and he began to show memory loss and other signs and symptoms of dementia, my mother stepped into the role of being his full-time caregiver. Some of her friends had been caregivers of spouses with dementia, and she had witnessed what a hard and stressful job it was. I offered to help, but my mother insisted she was okay and would alert me if his condition became unmanageable. In spite of this, disaster struck. One night, two years into the care journey, my mother and father sat down to dinner together. They were alone in their home. My mother suffered a massive heart attack. My father’s reactions to this emergency were slowed by his dementia, which was far more advanced than anyone realized. By the time help was summoned, my mom was already gone.

If only I had understood how the overwhelming stress of caring for a loved one devastates the primary family caregiver who selflessly takes on too much, refusing to ask for or accept help. This personal experience has driven my passion for educating all caregivers, both family and professional, in the importance of caring for themselves, and for giving them the tips and tools they need to care effectively for adults with any form of dementia. My vision is to bring light into the darkness of dementia through support, encouragement, education, and hope. My book, A Loving Approach to Dementia Care, is a special guide, filled with respect, calmness, creativity—and love.

Laura Wayman holds an associate in arts degree in gerontology and is a certified social services designee. She has over a decade of experience in and a strong dedication to quality aging. She is the director of dementia education and services for Comfort Keepers (Sacramento), the CEO of The Dementia Whisperers, Inc., and a sought-after speaker on issues of aging.


1 Comment

Filed under Consumer Health, Current Affairs, Dementia and Memory Loss, Emotional Health, Mental Health, Public Health

Should we bring historians to the movies?

Guest post by Thomas Leitch

Why do otherwise intelligent and discriminating people routinely come away from movies like Selma, American Sniper, The Imitation Game, and The Theory of Everything under the impression that their fictionalizations of history are true? Can’t they tell the difference between real life and the movies?

In a word, no, they can’t, says Jeffrey M. Zacks. Zacks, a professor of psychology and radiology at Washington University in St. Louis and author of Flicker: Your Brain on Movies, argues in a column in the 15 February issue of the New York Times that “our minds are well equipped to remember things that we see or hear—but not to remember the source of those memories”—because “our brain’s systems for source memory are not robust and are prone to failure.” Whether we read something in the newspaper, see footage of it on television or online, or watch it in a movie theater, we come away with much more vivid and precise memories of the content than the source. So we store memories from these very different sources in much the same way, and draw on them as equally authoritative when we search our memories for information.

So far, so illuminating. My only quarrel with Professor Zacks’s perceptive analysis of why people so routinely confuse movies with real life even if they know the movies are fictional concerns its last two sentences: “Having the misinformation explicitly pointed out and corrected at the time it was encountered substantially reduced its influence. But actually implementing this strategy—creating fact-checking commentary tracks that play during movies? always bringing a historian to the theater with you?—could be a challenge.”

The suggestion that bringing a historian along would protect me from indiscriminately remembering misinformation in movies implies that historians are uniquely qualified to pass judgment on factual accuracy. But in fact Professor Zacks’s whole column makes this assumption because it conflates history with what Professor Zacks calls “facts” and “the real world.” As police officers across the country agree, however, there’s a large and troublesome gap between even eyewitness testimony and the facts concerning real-world events. Sergeant Joe Friday was wrong: since the best testimony in the world is still testimony, not even the most reliable witness can give the police just the facts.

Historians are obviously more reliable than eyewitnesses in some ways. They’re more reflective, more disinterested, more likely to check their hypotheses against multiple sources. But since their testimony is always based on other people’s testimony, they’re less reliable than eyewitnesses in other ways. In addition, there are too many examples of biased histories (e.g., North Korean history textbooks, along with any number of textbooks produced around the world during wartime), racist histories (Woodrow Wilson’s History of the American People), and factually inaccurate histories (Michael Bellesiles’s Arming America: The Origins of a National Gun Culture) to justify any such assumption. Since the main reason for writing history, in fact, is to correct earlier histories, it’s doubtful that even historians trust other historians quite as completely as Professor Zacks thinks the rest of us ought to do. If they did, there would be no need for any further histories, only periodic updates, and historians would vanish.

I’d certainly agree that historians and filmmakers adopt very different attitudes toward history, facts, and the real world. But I’d still want to make distinctions among those three different subjects. And although I’m happy to acknowledge that filmmakers often play fast and loose with the facts, even when they advertise their products as “inspired by true events,” I’m a lot less confident than Professor Zacks that historians are so disinterested, reliable, and authoritative that they have a monopoly on the truth. So the next time I take a historian to the movies, I’ll be sure to follow it with dinner—not so that the historian can set me straight, but so that we can talk over the movie as more or less equally intelligent adults. I’m all for watching movies with a critical eye, but I’m not ready to farm out that job to the historians unless they understand that I plan to keep an equally critical eye on them. Meanwhile, I wonder exactly who’s going to be producing those fact-checking commentary tracks Professor Zacks mentions, and what makes them so sure that they have a corner on the truth, too.

Thomas Leitch is a professor of English and the director of the film studies program at the University of Delaware. He is the author of Wikipedia U: Knowledge, Authority, and Liberal Education in the Digital Age and Film Adaptation and Its Discontents: From “Gone with the Wind” to “The Passion of the Christ” and is the coeditor of A Companion to Alfred Hitchcock.


1 Comment

Filed under Current Affairs, Film / Documentary, For Everyone, Popular Culture

The Press Reads: African American Faces of the Civil War

Guest post by Ronald S. Coddington


Silas Chandler (right) and Sgt. Andrew Martin Chandler, Company F, Forty-fourth Mississippi Infantry. Tintype by unidentified photographer (c. 1861). Collection of Andrew Chandler Battaile.

The Library of Congress recently acquired a tintype of Silas Chandler and Sgt. Andrew Martin Chandler. To understand how master and slave came to pose for this photograph, The Washington Post spoke with Ron Coddington about the portrait; their story appears in Coddington’s latest book, African American Faces of the Civil War. Throughout Black History Month, we will offer a series of excerpts from recent publications, and today we share a selection from African American Faces of the Civil War.

“He Aided His Wounded Master”

 On September 20, 1863, during the thick of the fight at the Battle of Chickamauga, a Union musket ball tore into the right ankle and leg of Confederate Sgt. Andrew Chandler. A surgeon examined the nineteen-year-old Mississippian as he lay on the battlefield, determined the wound serious, and sent him to a nearby hospital.

Soon afterward, the injured sergeant was joined by Silas, a family slave seven years his senior. Silas attended his young master as a body servant—one of thousands of slaves who served in this capacity during the war.

According to family history, surgeons decided to amputate the leg. Silas stepped in. A descendant explained: “Silas distrusted Army surgeons. Somehow he managed to hoist his master into a convenient boxcar.” They rode by rail to Atlanta, where Silas sent a request for help to Andrew’s relatives. An uncle came and brought both men home to Mississippi, where they had started out two summers earlier.

Back in July 1861, Andrew had enlisted in a local military company, the Palo Alto Confederates. It later became part of the Forty-fourth Mississippi Infantry. He left home with Silas, one of about thirty-six slaves owned by his widowed mother Louisa.

Born in bondage on the Chandler plantation in Virginia, Silas moved with the family to Mississippi at about age two. He grew up to become a talented carpenter. The pennies he earned doing woodworking for people outside the family were saved in a jar hidden in a barn, according to his descendants. About 1860, he wed Lucy Garvin in a slave marriage not recognized by law at the time. A light-skinned woman classified as an octoroon, or one-eighth black, Lucy was the illegitimate daughter of a mulatto house slave named Polly and an unnamed plantation owner. Some said Cherokee Indian blood coursed through Lucy’s veins.

The following year, Silas bid his wife farewell and went to war with Andrew. Silas shuttled back and forth from home to encampment with much-needed supplies, delivering them to Andrew wherever he was as the Forty-fourth moved through Mississippi, Kentucky, and Tennessee. It is probable that it was Silas who brought word home to the Chandlers when Andrew fell into Union hands at the Battle of Shiloh in April 1862 and wound up in the prisoner of war camp at Camp Chase, Ohio. Andrew received a parole five months later and, after being exchanged, returned to his regiment.

In 1863 at Chickamauga, three of every ten men of the Forty-fourth who went into battle became casualties, including Andrew. Thanks to Silas, he avoided an amputation. According to one of Andrew’s grandsons, “A home town doctor prescribed less drastic measures and Mr. Chandler’s leg was saved.”

Andrew “was able to do Silas a service as well,” according to the family. During one military campaign, Silas “constructed a shelter for himself from a pile of lumber, the story goes. A number of calloused Confederate soldiers attempted to take Silas’ shelter away from him, and when he resisted threatened to take his life. At this point Mr. Chandler and his comrade Cal Weaver, came to Silas’ defense and threatened the marauders with the same kind of treatment they had offered Silas. This closed the argument.”

Silas left Andrew to serve another member of the Chandler family—Andrew’s younger brother Benjamin, a private in the Ninth Mississippi Cavalry. The switch may have happened at Benjamin’s enlistment in January 1864. At the time, Andrew was absent from his regiment, likely at home recuperating from his Chickamauga wound.

Benjamin and his fellow horse soldiers went up against Union Maj. Gen. William T. Sherman’s army group in Georgia and the Carolinas. As their final assignment, a portion of the Ninth, including Benjamin, formed part of a large escort for Jefferson Davis when the Confederate president fled Virginia after Richmond fell. On May 4, 1865, near Washington, Georgia, Davis separated from his escort and rode off with a much smaller force in an effort to move faster and attract less notice as federal patrols infiltrated the area. Benjamin was among those left behind; he surrendered on May 10, with Silas at his side. Union troops captured President Davis at nearby Irwinville, Georgia, the same day.

Silas returned to Mississippi, rejoined Lucy, and met his son William, who had been conceived while Silas was home after Andrew’s capture at Shiloh and was born in early 1863. Silas and Lucy had a total of twelve children, five of whom lived to maturity.

Silas established himself as a talented carpenter in the town of West Point, Mississippi. He taught the trade to his sons—there were at least four—and all of them worked together. “They built some of the finest houses in West Point,” noted a family member, who added that Silas and his boys constructed “houses, churches, banks and other buildings throughout the state.” In 1868, Silas and other former slaves erected a simple altar at which to celebrate their Baptist faith, near a cluster of bushes on land adjacent to property owned by Andrew and his family. They later replaced it with a wood-frame church. In 1896, Silas’s son William helped to build a new structure on the same site.

Silas remained active as a Baptist and also as a Mason. He lived within a few miles of Andrew and Benjamin, who raised families and prospered as farmers. Benjamin died in 1909. Silas died ten years later at age eighty-two in September 1919. Andrew survived Silas by only eight months; he died in May 1920.

In 1994, the Sons of Confederate Veterans and the United Daughters of the Confederacy conducted a ceremony at the gravesite of Silas in recognition of his Civil War service. An iron cross and flag were placed next to his monument. This event prompted mixed reactions from Chandlers, black and white.

Myra Chandler Sampson wrote of her great-grandfather Silas: “He was taken into a war for a cause he didn’t believe in. He was dressed up like a Confederate soldier for reasons that may never be known.” She denounced the ceremony as “an attempt to rewrite and sugar-coat the shameful truth about parts of our American history.”

Andrew Chandler Battaile, great-grandson of Andrew, met Myra’s brother Bobbie Chandler at the ceremony. He said of the experience, “It was truly as if we had been reunited with a missing part of our family.”

Bobbie Chandler accepts the role of his great-grandfather. When asked about Silas and his connection to the Confederate army, he observed, “History is history. You can’t get by it.”

Ronald S. Coddington is assistant managing editor at The Chronicle of Higher Education, editor and publisher of Military Images magazine, a contributing writer to the New York Times’s Disunion series, and a columnist for Civil War News. His trilogy of Civil War books, African American Faces of the Civil War, Faces of the Confederacy, and Faces of the Civil War, all published by Johns Hopkins University Press, combines compelling archival images with biographical stories to reveal the human side of the war. To read The Civil War Trust interview with Coddington, click here.

1 Comment

Filed under African American Studies, Civil War

Valentine’s Day crush: heartthrobs and pinup picks for Jane Austen’s characters

Guest post by Janine Barchas

If Lydia Bennet hung celebrity pinups above her bed, whom might she have singled out among the rich and famous from the Georgian era?

The following speculations are rooted in historical truth.  Celebrity culture was in full swing when Jane Austen was born in 1775.  Although hers was the age before the photograph, painted portraits of the rich and famous were routinely reproduced by engravers and sold as inexpensive prints.  These black and white reproductions circulated images of famous actors, politicians, naval heroes, and members of the so-called bon ton as pinups for the middling consumer.  In this manner, the elegant paintings of even Sir Joshua Reynolds—England’s greatest portraitist—functioned as the modern photographs of Annie Leibovitz do today, making it hard to say whether he recorded or created celebrity with his art.  London teemed with well-stocked print shops from which to select this poster-art equivalent of the Georgian era.

In my book Matters of Fact in Jane Austen: History, Location, and Celebrity I trace Austen’s allusions to celebrities through her sly borrowings of names such as Dashwood, Wentworth, Woodhouse, Fitzwilliam, D’Arcy, and Tilney—powerful real-world surnames with tremendous political and historical cachet for Austen’s generation. Valentine’s Day seems like the right occasion to pose the next logical question (slightly less scholarly perhaps, but no less important): whose likenesses might Jane Austen’s characters have admired or hung up in their rooms?

Austen herself connects her fictional characters with celebrity portraits in a letter dated 24 May 1813, written to her sister Cassandra. On that day Jane attended the first-ever retrospective of Reynolds’s work. She writes that during her visits to London’s art galleries she looks for “Mrs Bingley” and “Mrs Darcy” on the walls—indicating that her fictional characters may have been inspired by actual celebrities. She writes of being “very well pleased … with a small portrait of Mrs Bingley, excessively like her” in the Exhibition in Spring Gardens, but notes that she has not yet found “one of her Sister … Mrs Darcy.” Although she declares that there is “no chance of her in the collection of Sir Joshua Reynolds’s Paintings which is now shewing in Pall Mall, & which we are also to visit,” she jokingly predicts “I dare say Mrs D. will be in Yellow.”

Since neither Austen nor her sensible heroines were mere groupies, it is predominantly her minor characters that I suspect of having celebrity pinups in their rooms.

 Over Lydia Bennet’s bed: “Portrait of Mrs. Abingdon as Miss Prue”

Current title: “Mrs. Abingdon (c. 1737-1815).” Location: Yale Center for British Art. For more info see No. 103 at www.whatjanesaw.org.

At sixteen, Lydia shows herself “the most determined flirt that ever made herself and her family ridiculous.” This portrait of Frances Barton, the well-known actress who grew up in the slums of Drury Lane, became an icon of flirtation—the Georgian equivalent of Marilyn Monroe on a subway grate. After marrying her Irish music master, Mrs. Abington took to the stage and became known for her uninhibited comic roles. Reynolds paints Fanny in the character of Miss Prue from Congreve’s Love for Love, a famous comic part. The somewhat vulgar pose, which shows her leaning on the back of a chair with her thumb at her mouth, is meant to reflect the coy flirtations of the play’s country ingénue. In this context, even the lapdog adds to the sexual innuendo.

In Kitty Bennet’s room: “Portrait of Kitty Fisher as Cleopatra”

Current title: “Catherine Fisher (Kitty) (d. 1767)” or “Cleopatra Dissolving the Pearl.” Location: Kenwood House, London. For more info see No. 132 at www.whatjanesaw.org.

Although Kitty “will follow wherever Lydia leads,” she might have relished her unique connection to the celebrated Kitty Fisher—the most prominent London courtesan of the eighteenth century, whose best-known portrait (also by Reynolds) likened her to a modern Cleopatra. Legend has it that Cleopatra made a bet with her lover Mark Antony to see who could spend the greater fortune on a meal. She won by dissolving a large and valuable pearl in vinegar (some say wine), defiantly drinking down the concoction to show her disdain for wealth. Kitty Fisher’s extravagance was similarly the stuff of legend: she ate a £100 bank note (the equivalent of a year’s salary for the middling class) on buttered bread, savoring the shock value this produced in her companions. By comparing Fisher to Cleopatra, the portrait counterbalances the courtesan’s legendary recklessness with the gravity of history. Due to Fisher’s dubious celebrity, the diminutive “Kitty” for Catherine was, Austen surely knew, associated in popular culture with loose morals.

In Mrs Bennet’s sitting room: “Portrait of Mrs Baldwin”

Current title: “Mrs. Baldwin (1763-1839).” Location: Bowood House, Wiltshire. For more info see No. 25 at www.whatjanesaw.org.

Although expressions of fandom are usually confined to the bedrooms of young people, Mrs Bennet—who insists upon her equal fondness for sea bathing and redcoats—is not a woman likely to be outdone by her youngest daughters. Imagine, therefore, this portrait of the standout Jane Baldwin somewhere near the Bennet stash of smelling salts. Although as the daughter of a Greek merchant Jane was not a woman of title, she married the British Consul to Alexandria. Back in England, Jane’s exotic features received much notice from society. Mrs Baldwin’s costume has been interpreted by some as the national costume of a Greek lady and by others as a fancy dress worn at a costume ball given by the King. Mrs Baldwin, like Mrs Bennet, was not a woman afraid of attracting notice.

Above the desk of Marianne Dashwood: “Portraits of Elizabeth and Francis Russell”

Current title: “Lady Elizabeth Keppel (1739-68).” Location: Woburn Abbey. For more info see No. 22 at www.whatjanesaw.org.

Current title: “Francis Russell, Marquess of Tavistock (1739-67).” Location: Blenheim Palace. For more info see No. 128 at www.whatjanesaw.org.

The dashing Marquess of Tavistock and his young wife, Elizabeth, were the type of doomed celebrity couple that Marianne Dashwood, with her Romantic “passion for dead leaves,” would cherish. At 25, the beautiful Lady Elizabeth Keppel wed the well-traveled and handsome Francis Russell, Marquess of Tavistock. After the arrival of a son, the Russells were England’s poster couple for wedded bliss—additionally blessed with court connections, shared intellectual interests, and wealth. Three years into this happy marriage, Francis was tragically killed by a fall from his horse. Within months Elizabeth, who is said to have pined away from grief, joined her young husband in death. Imagine Reynolds’s portrait of Elizabeth, dressed in the bridesmaid gown that she wore to the wedding of George III and Queen Charlotte, as an omen of tragic romance above the writing desk where Marianne composes her tear-stained letters to Willoughby.

In Mary Crawford’s boudoir: “Portrait of Mary Beauclerk, Lady Charles Spencer”

Current title: “Lady Charles Spencer (1743-1812).” Location: Private Collection. For more info see No. 97 at www.whatjanesaw.org.

In 1762, Mary Beauclerk, daughter of Lord Vere, married Lord Charles Spencer, second son of the third Duke of Marlborough, a famous politician. Unusual for a woman, Mary had her portrait painted with her horse. She wears a striking red riding habit cut in the manner of a man’s frock coat—with a waistcoat fastened in the masculine way from left to right. While the costume, which includes a long skirt, stops well short of androgyny, the portrait’s unconventional masculine flair conveys a woman with a daring sense of style and forceful personality. Given the emphasis in Mansfield Park on the appropriation by Mary Crawford of poor Fanny Price’s horse as a symbol of upstart ambitions to unseat Fanny in Edmund’s affections, Mary’s admiration of this society hostess (and namesake) seems almost certain.

Over the sickbed of Louisa Musgrove: “Portrait of Captain John Hamilton”

Current title: “Captain John Hamilton (d. 1755).” Location: Abercorn Heirlooms Trust. For more info see No. 42 at www.whatjanesaw.org.

After her fall along the Cobb, Louisa Musgrove likely stares long and hard at portraits of celebrity naval officers such as John Hamilton, a legendary eighteenth-century sailor. As testimony to his travels and, possibly, to his famed good humor, Hamilton is flamboyantly dressed in the costume of a Hungarian hussar, complete with mustache, fur busby, small dagger, and a dramatic fur coat that might be bear, fox, or even wolf. John Hamilton accompanied George II from Hanover in 1736 on a ship called Louisa, a fact that the impressionable Miss Musgrove is free to interpret as significant. He was eventually appointed captain of a ship that struck a shoal and sank, drowning him and most of his crew; the spot became known as Hamilton Shoal in commemoration of the wreck.

If, like Jane Austen herself, you enjoy a little celebrity spotting, you might visit the digital recreation of the 1813 art exhibition that she attended: www.whatjanesaw.org. All of the above portraits by Sir Joshua Reynolds, and over a hundred more, hang in the What Jane Saw e-gallery in precisely the same arrangement on the walls as witnessed by Jane Austen on 24 May 1813.  This Valentine’s Day, go ahead and get a crush on someone Austen knew!


Janine Barchas is Professor of English at the University of Texas, where she teaches Austen in Austin. She is the author of Matters of Fact in Jane Austen: History, Location and Celebrity and the creator of What Jane Saw, a digital reconstruction of an 1813 art gallery. As co-curator of the “Will & Jane” exhibition at The Folger Shakespeare Library in 2016, she’ll next explore the parallel afterlives of Shakespeare and Austen and their rise to literary superstar status.


3 Comments

Filed under Fine art, For Everyone, Libraries, Literature

The Press Reads: Teaching Machines

The following post about MOOCs is an excerpt from Teaching Machines: Learning from the Intersection of Education and Technology, by Bill Ferster

The allure of educational technology is easy to understand. In almost every other area of modern life, machines have contributed significantly, but they are largely missing from our schools. A nineteenth-century visitor would feel quite at home in a modern classroom, even at our most elite institutions of higher learning. People have looked to machines to solve issues in most other endeavors in their lives, hoping to gain efficiency and savings of cost and time. So it is not surprising that technology has been employed for both noble reasons (better learning outcomes) and less than noble ones (teacher-proofing).

At the college level, the pressures of skyrocketing costs and competition from e-learning have made online educational technology a source of much discussion. Teresa Sullivan, the president of the University of Virginia, was summarily fired in a coup d’etat in 2012 (and subsequently rehired because of protests from an outraged faculty and campus community) ostensibly because the university’s governing board of visitors perceived her not to be embracing online education rapidly enough.

New York University professor Clay Shirky makes a strong point that the college experience we fantasize about for our children, where white-haired professors wearing leather-patched tweed jackets discuss literature in small seminars, is a reality only for a very small percentage of students at elite institutions. “The top 50 colleges on the U.S. News and World Report list (which includes most of the ones you’ve heard of) only educate something like 3 percent of the current student population.” The majority of students sit in impersonal classes with hundreds of other students to be lectured at by instructors of varying competence, and they emerge from college with a degree and, often, a crushing burden of debt. It is little wonder that the siren song of new, technology-driven, and potentially scalable forms of education such as e-learning is resonating with some higher education leaders.

If one is to believe the press, from obscure educational journals to the New York Times, the teaching machine for the start of the twenty-first century is the MOOC. Massive open online courses are the latest contender: courses from commercial companies and prestigious universities such as Stanford, MIT, and Harvard are offered online to huge numbers of participants, often thousands at a time. There are those who view MOOCs as the answer to the ever-spiraling cost of higher education, and others who see them as sowing the seeds of the demise of the university as we know it. The truth, of course, lies somewhere in between.

It is important to see some of the potentially threatening innovations such as MOOCs in the same way that their providers see them: as experiments. Daphne Koller, co-founder of the venture-capital-funded MOOC developer Coursera, views the MOOC as an unprecedented opportunity to use these large numbers of students to test scientifically what works, by running controlled experiments she refers to as “A/B testing,” in which a change is made to the instruction for some students but not for others. Because the cohorts are far larger than those typically available in traditional educational research, the effect of the change can be tested empirically and the overall instruction adjusted accordingly.
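Koller's actual experimental platform is not described in this excerpt, but the logic of such an A/B test is simple enough to sketch. The following is a minimal, hypothetical illustration (the cohort sizes, pass rates, and function names are invented for the example, not drawn from Coursera): students are randomly split between the original lesson (A) and a revised one (B), and a two-proportion z-test asks whether the observed difference in quiz pass rates is larger than chance alone would explain.

```python
# Minimal, hypothetical sketch of an instructional A/B test.
# Arm A sees the original lesson; arm B sees a revised lesson.
# All numbers below are invented for illustration.
import math
import random

random.seed(42)

def simulate_cohort(n_students, true_pass_rate):
    """Simulate how many of n_students pass the end-of-unit quiz."""
    return sum(random.random() < true_pass_rate for _ in range(n_students))

def two_proportion_z_test(passes_a, n_a, passes_b, n_b):
    """Return (z, two-sided p-value) for the difference in pass rates."""
    rate_a, rate_b = passes_a / n_a, passes_b / n_b
    pooled = (passes_a + passes_b) / (n_a + n_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / std_err
    p_value = math.erfc(abs(z) / math.sqrt(2))  # normal approximation
    return z, p_value

# MOOC-scale cohorts: thousands of students randomly assigned to each arm.
n_a = n_b = 5_000
passes_a = simulate_cohort(n_a, true_pass_rate=0.62)  # original instruction
passes_b = simulate_cohort(n_b, true_pass_rate=0.65)  # revised instruction

z, p = two_proportion_z_test(passes_a, n_a, passes_b, n_b)
print(f"pass rate A: {passes_a / n_a:.3f}   pass rate B: {passes_b / n_b:.3f}")
print(f"z = {z:.2f}, p = {p:.4f}")
```

The point of the sketch is scale: with several thousand students per arm, even a difference of a few percentage points produces an unambiguous result, which is exactly the advantage Koller sees in MOOC-sized cohorts over traditional classroom studies.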

Clearly, students need to be prepared to use the technological tools of their generation; today that means the computer or tablet. But the successful introduction of technology in whole-class instruction does not easily fit the teaching methods currently employed in most U.S. schools. The majority of K–12 schools do not provide students with laptops or encourage them to use them in the classroom. When they do, students are disruptively herded into computer labs, or carts filled with laptops are wheeled in for specific curricular activities. The use of individual computers makes teachers less willing to introduce technology into their classrooms because it interferes with the whole-class nature of current instructional practice. Contrast this with the almost uniform adoption of digital projectors and “smart boards,” which evolved directly from the last generation of education technology, the film projector and the chalkboard.

One of the more concerning issues about the commercial MOOC providers is the source of their funding: venture capital. Venture capital is provided by investment firms to fund early-stage companies. These firms typically invest in a large number of startups with the assumption that 90 percent of them will fail, but the 10 percent that thrive will yield a return on investment of at least 300 percent (known as a “3-bagger”). This strategy has been extremely successful in the high-technology sector and is in large part responsible for the phenomenal products and companies that have emerged from Silicon Valley. Venture capital firms provide a strong support network to help guide new entrepreneurs, but their model has its darker side.

There is an inherent instability in any “disposable” relationship. The funded companies typically cede a significant amount of control in exchange for the millions of dollars they receive. When a company delivers the kinds of profits that the funders see as significant, that control can be very constructive and nurturing. But if the company underperforms or takes longer to deliver, it can find itself among the “walking dead” (with just enough capital to stay in business but not enough to grow), closed down completely, or merged with another company in the firm’s portfolio.

Bill Ferster is a research professor at the University of Virginia’s Curry School of Education and the director of visualization for the Sciences, Humanities & Arts Technology Initiative (SHANTI). He is the author of Teaching Machines: Learning from the Intersection of Education and Technology and Interactive Visualization: Insight through Inquiry.

Comments Off on The Press Reads: Teaching Machines

Filed under Academia, Education, Higher Education

Writing “Politics in the Corridor of Dying”

Guest post by Jennifer Chan

How do you write about a topic on which over 100,000 journal articles, books, conference papers, scientific reports, government plans, and United Nations documents have already been published? The question nagged at me for months. The literature on AIDS seemed to swell by the day.

What angle should I take? Which data should I privilege? What should I drop? Which theory is the most appropriate? How many chapters do I need to write? Eleven? That’s too many, I thought to myself. These days, no one would read a 500-page book on AIDS. The times have changed. Now there are good antiretroviral drugs, and in many countries people with AIDS live a relatively healthy life. Drug prices have come down so much that it’s almost hard to scream at Big Pharma anymore. And the public—donors, the UN, governments, scientists, researchers—has moved on to other health crises: maternal and child health, health innovation, health systems. Gone is the era of vertical funding, of disproportionate and distorting funding for any one specific disease such as AIDS.

The data must have been fed up with my staring at it for months on end: finally, it seemed to skip my brain and walk straight up to the keyboard. The book was written—poured out—within two months (don’t ask how much time I took to revise, though!).

Coming up with the title (Politics in the Corridor of Dying) was a different story. Those six words consumed many minutes, hours, and days of my waking time and my insomniac nights. I toyed with so many versions that I couldn’t keep track. One day, while walking back to my hotel after an interview with a Russian activist from the underground patient control movement, I chanced upon a tiny museum on a quiet street of St. Petersburg. On the brick wall at the entrance hung a poster scribbled with the word “necrorealism.” My curiosity was piqued. I am a big fan of contemporary and abstract art and know a bit about abstract expressionism, Dadaism, minimalism, and even nouveau réalisme, but I had never heard of necrorealism.

After researching the movement, I discovered a phrase that became my natural title. The “corridor of dying,” according to Vladimir Kustov, a founding member of necrorealism in 1970s Russia, refers to the interval between life and death, that unrepresentable purgatory.

Both the movement and its art were an epiphany for me. The stark necrorealist paintings and black-and-white film footage that I saw in the museum helped me visualize the long corridor of dying that we have been collectively traversing during the past 30 years of the history of AIDS.

Passing through dilapidated labs, lethargic ministries of health, shiny pharmaceutical company boardrooms, complacent UN secretariats, and the hustle and bustle of clinics and NGO offices, AIDS activists fought hard and made some significant gains. What they have done involves more than learning and appropriating the language of science to be on par with the experts; decrying Pharma greed; challenging an outdated UN governance structure; and, when necessary, pointing fingers at themselves as community “experts.”

There really is only one argument that I want to make in this book (which, by the way, was eventually compressed into merely 268 pages, coming to under 10 pages per year of AIDS history, in case you are looking for quick reads on world affairs!). Beyond the concrete gains that the movement realized in terms of increased funding, treatment access, and millions of lives saved and prolonged, the single biggest achievement of AIDS activism over the past three decades lies in the fact that the movement has exposed the fundamental legitimation crises of four contemporary regimes of power: scientific monopoly, market fundamentalism, governance statism, and community control. Activists have cracked open the formerly closed doors of knowledge and power networks, forcing them to diversify and democratize.

This is obviously a contentious argument to make. Is the glass half full, or half empty? Have activists brought some fundamental changes to global health governance, or have they failed? Let me hear your thoughts! Welcome to the corridor of dying . . .

chanJennifer Chan is an associate professor in the Institute for Gender, Race, Sexuality, and Social Justice at the University of British Columbia. She is the editor of Another Japan Is Possible: New Social Movements and Global Citizenship Education and the author of Gender and Human Rights Politics in Japan: Global Norms and Domestic Networks. Her most recent book is Politics in the Corridor of Dying: AIDS Activism and Global Health Governance.

Comments Off on Writing “Politics in the Corridor of Dying”

Filed under Current Affairs, Health and Medicine

Casualties and convictions: Americans’ response to casualties may hold lessons for France after Charlie Hebdo

Guest post by Zachary Shore

Shortly after the horrifying Paris attacks, French Prime Minister Manuel Valls declared war. France, he said, must defend its values of liberty and fraternity. Less than two weeks later the French government announced sweeping new measures, including hiring 2,600 counterterrorism officers, widening the use of telephone surveillance, and expanding data collection on citizens. But for how long will the French people support encroachment on their civil liberties? Will anger at extremists persist, or will fear of further casualties gradually erode support for a robust war on terror? America might offer France an example.

Contrary to what the terrorists may think, Americans are not deterred by casualties. What weakens their resolve is moral ambiguity. One of the more shocking stories of the previous year involved the savage beheadings of American journalists James Foley and Steven Sotloff. The Islamic State’s leaders apparently believed that brutally beheading Americans could act as a deterrent, dissuading further U.S.-led airstrikes or interventions against them. It seemed a reasonable assumption. Americans were weary from years of combat in Afghanistan and Iraq, and wary of another potential quagmire in someone else’s civil war. But to ISIS’s surprise, its strategy badly backfired. American public opinion turned dramatically in favor of attacking ISIS forces. So what was the flaw in ISIS thinking?

ISIS got it wrong because its leaders misread the lessons of the past. Americans are not deterred by losses; they are incensed by them. The locus of attack, on American soil or abroad, is irrelevant. What saps their stomach for a fight is not fear but ambiguity.

Vietnam is a clear example. Le Duan, the man running North Vietnam for most of the war, advised his comrades to kill as many Americans as possible in order to undermine Americans’ support for the war. That strategy succeeded in Vietnam not because of the casualties themselves, but rather because of the moral ambiguity that the U.S. public came to feel about the conflict. Increasingly the public feared that they were not just fighting communists, but viciously destroying lives in someone else’s civil war. Anger from the Tonkin Gulf could not match Americans’ growing qualms. A comparable unease arose when American Marines were targeted in Lebanon in 1983, and again in 1993 when American corpses were dragged through Somalia’s streets. It was not the casualties that Americans could not endure; it was the uncertainty over the rightness of their cause.

Pearl Harbor, by contrast, seemed perfectly black and white. On paper, the Japanese plan must have seemed sensible to some. Because the United States was so much stronger, Japan would deter America by dealing it a knock-out blow. Destroying the aircraft carriers in Hawaii would buy time for Japan’s advance and weaken the American will for a protracted war. But Pearl Harbor actually revealed how badly the Japanese understood their enemy. The surprise attack evaporated previously potent anti-war sentiment and solidified American resolve. The nation determined to defeat Japan at whatever the cost. And the cost was steep indeed, in both American and Japanese lives.

Pearl Harbor exposed more than just the Japanese misreading. It also revealed a critical aspect of the American character. Unwarranted attacks against Americans who were not themselves involved in combat posed an unmistakable breach of fairness. And whether it is the German sinking of an ocean liner in 1915, the Japanese strike on Pearl Harbor, or the murder of 3,000 innocent lives on 9/11, when Americans perceive attacks to be unjust, they will act.

9/11 had a crucial Pearl Harbor-ness about it. It struck Americans as supremely unfair. Nothing, from the American perspective, could have justified such a massive assault on civilians in their workplace. That the public eventually soured on the war in Iraq was not because its moral outrage over 9/11 had faded. It was because the moral ambiguity of fighting in Iraq became apparent. Saddam was not involved in 9/11. The weapons of mass destruction weren’t there. And the dream of transforming the Middle East by bringing democracy to Iraq looked increasingly absurd. Americans didn’t lose their nerve because of casualties; they lost their will to fight someone else’s battles when the moral impetus grew muddy.

ISIS’s actions eliminated all uncertainty. The beheadings removed any doubt about their barbaric nature. Their crimes invigorated Americans’ resolve precisely because they saw those savage killings as immeasurably unjust. Unwittingly, ISIS Pearl Harbored itself. It aroused the nation’s moral outrage—the force that, if sustained, truly drives its will to win. The task of leadership is to maintain that sense of outrage as ambiguity creeps in.

The question for the French public today is whether it can maintain its resolve in the face of terror. And the challenge for the Valls regime will be to devise a sensible strategic response that preserves the public’s long-term support. After 9/11 France declared, “We are all Americans now.” After January 7th, the French may need some good old American moral outrage.

Zachary Shore is the author of Breeding Bin Ladens: America, Islam, and the Future of Europe. He is an associate professor of National Security Affairs at the Naval Postgraduate School and a Senior Fellow at the Institute of European Studies at the University of California, Berkeley. He has served on the Policy Planning Staff at the U.S. Department of State through a fellowship from the Council on Foreign Relations.

Comments Off on Casualties and convictions: Americans’ response to casualties may hold lessons for France after Charlie Hebdo

Filed under Foreign Policy, Iraq, Politics