Theobald Wolfe Tone


Theobald Wolfe Tone, commonly known as Wolfe Tone (June 20, 1763 – November 19, 1798), was a leading figure in the United Irishmen movement for Irish independence and is regarded as the father of Irish republicanism. Tone himself admitted that, with him, hatred of England had always been “rather an instinct than a principle.” Until his views became more generally accepted in Ireland, he was prepared to work for reform as distinguished from revolution. He wanted to root out the popular respect for the names of James Caulfeild, 1st Earl of Charlemont, and Henry Grattan, transferring the leadership to more militant campaigners. Grattan was a reformer and a patriot without democratic ideas; Wolfe Tone was a revolutionary thinker and activist whose principles were drawn from the French Convention. Grattan’s political philosophy was allied to that of Edmund Burke; Tone was a disciple of Georges Danton and Thomas Paine.

Early years

Tone was born in Dublin, the son of a Protestant (Church of Ireland) coach-maker. He studied law at Trinity College, Dublin, attended the Inns of Court in London, and qualified as a barrister from King’s Inns at the age of 26. As a student, he eloped with Elizabeth Witherington, the sixteen-year-old daughter of William Witherington of Dublin and his wife, Catherine Fanning. They had two sons and a daughter. She survived him by fifty years.

Politician

Tone, disappointed at finding no notice taken of a scheme he had submitted to William Pitt the Younger for founding a military colony in Hawaii, turned to Irish politics. His 1790 pamphlet attacking the administration of the Marquess of Buckingham brought him to the notice of the Whig Club, and in September 1791 he wrote a remarkable essay over the signature “A Northern Whig,” of which 10,000 copies were said to have been sold.

The principles of the French Revolution were at this time being eagerly embraced in Ireland, especially among the Presbyterians of Ulster. Prior to the appearance of Tone’s essay, a meeting had been held in Belfast where a resolution in favor of the abolition of religious disqualifications had given the first sign of political sympathy between the Roman Catholics and the Protestant dissenters (“Whigs”) of the north. The essay of “A Northern Whig” emphasized the growing breach between Whig patriots like Henry Flood and Henry Grattan, who aimed at Catholic emancipation and parliamentary reform without breaking the connection with England, and the men who desired to establish a separate Irish republic. Tone expressed contempt for the constitution which Grattan had so triumphantly extorted from the British government in 1782; and, himself an Anglican, he urged co-operation between the different religious sects in Ireland as the only means of obtaining complete redress of Irish grievances.

Society of the United Irishmen

In October 1791, Tone converted these ideas into practical policy by founding, in conjunction with Thomas Russell, Napper Tandy, and others, the Society of the United Irishmen. The original purpose of this society was no more than the formation of a political union between Roman Catholics and Protestants, with a view to obtaining a liberal measure of parliamentary reform. It was only when it was obvious that this was unattainable by constitutional methods that the majority of the members adopted the more uncompromising opinions which Wolfe Tone held from the first, and conspired to establish an Irish republic by armed rebellion.

It is important to note the use of the word “united.” This was what particularly alarmed the British aristocracy in Westminster, who saw the Catholic population as the greatest threat to their power in Ireland. However, Tone’s ideas would have been very difficult to apply to the real situation in Ireland, as the Catholics had pressing concerns of their own: chiefly the tithes they were obliged to pay to the Anglican Church of Ireland and the high rents they paid to lease land from the Protestant Ascendancy. Eighteenth-century Ireland was a sectarian state, ruled by a small Anglican minority over a majority Catholic population, some of whose ancestors had been dispossessed of land and political power in the seventeenth-century Plantations of Ireland. This was in part also an ethnic division, the Catholics being descended from native Irish, Normans, and “Old English,” and the Protestants more often from English and Scottish settlers. Such sectarian animosity undermined the United Irishmen movement: Two secret societies from Ulster fought against each other, the Peep O’Day Boys, made up mostly of Protestants, and the Defenders, made up of Catholics. These two groups clashed frequently throughout the latter half of the eighteenth century, and sectarian violence worsened in the County Armagh area from the mid-1790s. This undermined Wolfe Tone’s movement, as it suggested that Ireland could not be united and that religious prejudices were too strong. In addition, the militant Protestant groups, including the newly founded Orange Order, could be mobilized against the United Irishmen by the British authorities.

However, democratic principles were gaining ground among the Catholics as well as among the Presbyterians. A quarrel between the moderate and the more advanced sections of the Catholic Committee led, in December 1791, to the secession of sixty-eight of the former, led by Lord Kenmare. The direction of the committee then passed to more violent leaders, of whom the most prominent was John Keogh, a Dublin tradesman, known as “Gog.” The active participation of the Catholics in the movement of the United Irishmen was strengthened by the appointment of Tone as paid secretary of the Roman Catholic Committee in the spring of 1792. Despite his desire to emancipate his fellow countrymen, Tone had very little respect for the Catholic faith. When the legality of the Catholic Convention, in 1792, was questioned by the government, Tone drew up for the committee a statement of the case on which a favorable opinion of counsel was obtained; and a sum of £1500 with a gold medal was voted to Tone by the Convention when it dissolved itself in April 1793. A petition was made to the king early in 1793, and that year the first enfranchisement of Catholics was enacted, extending the vote to those who qualified as “forty-shilling freeholders.” They could not, however, enter parliament or hold state offices above the level of grand juror. Burke and Grattan were anxious that provision should be made for the education of Irish Roman Catholic priests in Ireland, to preserve them from the contagion of Jacobinism in France.

Revolutionary in exile

In 1794, the United Irishmen, persuaded that their scheme of universal suffrage and equal electoral districts was not likely to be accepted by any party in the Irish parliament, began to found their hopes on a French invasion. An English clergyman named William Jackson, who had imbibed revolutionary opinions during his long stay in France, came to Ireland to negotiate between the French committee of public safety and the United Irishmen. Tone drew up a memorandum for Jackson on the state of Ireland, which he described as ripe for revolution; the memorandum was betrayed to the government by an attorney named Cockayne, to whom Jackson had imprudently disclosed his mission; and in April 1794, Jackson was arrested on a charge of treason.

Several of the leading United Irishmen, including Reynolds and Hamilton Rowan, immediately fled the country; the papers of the United Irishmen were seized, and for a time the organization was broken up. Tone, who had not attended meetings of the society since May 1793, remained in Ireland until after the trial and suicide of Jackson in April 1795. Having friends among the government party, including members of the Beresford family, he was able to make terms with the government, and in return for information as to what had passed between Jackson, Rowan and himself, he was permitted to emigrate to the United States, where he arrived in May 1795. Before leaving, he and his family traveled to Belfast, and it was at the summit of Cave Hill that Tone made the famous Cave Hill compact with fellow United Irishmen, Russell and McCracken, promising “never to desist in our efforts until we had subverted the authority of England over our country and asserted our independence.” Living in Philadelphia, he wrote a few months later to Thomas Russell expressing unqualified dislike of the American people, whom he was disappointed to find no more truly democratic in sentiment and no less attached to authority than the English; he described George Washington as a “high-flying aristocrat,” and he found the aristocracy of money in America still less to his liking than the European aristocracy of birth.

Tone did not feel himself bound by his agreement with the British government to abstain from further conspiracy; and finding himself at Philadelphia in the company of Reynolds, Rowan, and Tandy, he went to Paris to persuade the French government to send an expedition to invade Ireland. In February 1796, he arrived in Paris and had interviews with De La Croix and Carnot, who were impressed by his energy, sincerity, and ability. A commission was given him as adjutant-general in the French army, which he hoped might protect him from the penalty of treason in the event of capture by the English; though he himself claimed the authorship of a proclamation said to have been issued by the United Irishmen, enjoining that all Irishmen taken with arms in their hands in the British service should be instantly shot; and he supported a project for landing a thousand criminals in England, who were to be commissioned to burn Bristol, England, and commit other atrocities. He drew up two memorials representing that the landing of a considerable French force in Ireland would be followed by a general rising of the people, and giving a detailed account of the condition of the country.

Hoche’s expedition and the 1798 rebellion

The French Directory, which possessed information from Lord Edward FitzGerald and Arthur O’Connor confirming Tone’s statements, prepared to dispatch an expedition under Louis Lazare Hoche. On December 15, 1796, the expedition, consisting of forty-three sail and carrying about 14,000 men with a large supply of war materiel for distribution in Ireland, sailed from Brest. Tone accompanied it as “Adjutant-general Smith” and had the greatest contempt for the seamanship of the French sailors, who were unable to land owing to severe gales. The fleet waited for days off Bantry Bay for the winds to ease, but eventually returned to France. Tone served for some months in the French army under Hoche; in June 1797, he took part in preparations for a Dutch expedition to Ireland, which was to be supported by the French. But the Dutch fleet was detained in the Texel for many weeks by unfavorable weather, and before it eventually put to sea in October (only to be crushed by Duncan in the battle of Camperdown), Tone had returned to Paris, and Hoche, the chief hope of the United Irishmen, was dead.

Napoleon Bonaparte, with whom Tone had several interviews about this time, was much less disposed than Hoche had been to undertake in earnest an Irish expedition; and when the rebellion broke out in Ireland in 1798, Bonaparte had already started for Egypt. When, therefore, Tone urged the Directory to send effective assistance to the Irish rebels, all that could be promised was a number of small raids to descend simultaneously on different points of the Irish coast. One of these, under General Humbert, succeeded in landing a force near Killala, County Mayo, and gained some success in Connacht (particularly at Castlebar) before it was subdued by Lake and Charles Cornwallis. Wolfe Tone’s brother, Matthew, was captured, tried by court-martial, and hanged; a second raid, accompanied by Napper Tandy, came to disaster on the coast of Donegal; while Wolfe Tone took part in a third, under Admiral Bompard, with General Hardy in command of a force of about 3000 men. This encountered an English squadron at Rathmullan on Lough Swilly on October 12, 1798. Tone, on board the Hoche, refused Bompard’s offer of escape in a frigate before the action, and was taken prisoner when the Hoche surrendered.

Death

When the prisoners were landed a fortnight later, Sir George Hill recognized Tone in the French adjutant-general’s uniform. At his trial by court-martial in Dublin, Tone made a speech avowing his determined hostility to England and his intention “by frank and open war to procure the separation of the countries”.

Recognizing that the court was certain to convict him, he asked “… that the court should adjudge me to die the death of a soldier, and that I may be shot….” Reading from a prepared speech, he defended his view of a military separation from Britain (as had occurred in the fledgling United States), and lamented the outbreak of mass violence:

“Such are my principles; such has been my conduct; if in consequence of the measures in which I have been engaged misfortunes have been brought upon this country, I heartily lament it, but let it be remembered that it is now nearly four years since I have quitted Ireland and consequently I have been personally concerned in none of them; if I am rightly informed very great atrocities have been committed on both sides, but that does not at all diminish my regret; for a fair and open war I was prepared; if that has degenerated into a system of assassination, massacre, and plunder I do again most sincerely lament it, and those few who know me personally will give me I am sure credit for the assertion.”

To the people, he had the following to say: “I have labored to abolish the infernal spirit of religious persecution by uniting the Catholics and Dissenters,” he declared from the dock. “To the former, I owe more than ever can be repaid. The service I was so fortunate as to render them they rewarded munificently but they did more: When the public cry was raised against me, when the friends of my youth swarmed off and left me alone, the Catholics did not desert me.

“They had the virtue even to sacrifice their own interests to a rigid principle of honor. They refused, though strongly urged, to disgrace a man who, whatever his conduct towards the Government might have been, had faithfully and conscientiously discharged his duty towards them and in so doing, though it was in my own case, I will say they showed an instance of public virtue of which I know not whether there exists another example.”

His eloquence, however, was in vain, and his request to be shot was denied. He was sentenced to be hanged on November 12, 1798. Before this sentence was carried out, he suffered a fatal neck wound, self-inflicted according to contemporaries, from which he died several days later at the age of 35 in Provost’s Prison, Dublin, not far from where he was born.

Support from Lord Kilwarden

A long-standing belief in Kildare is that Tone was the natural son of a neighboring landlord at Blackhall, near Clane, called Theobald Wolfe. This man was certainly his godfather, and a cousin of Arthur Wolfe, 1st Viscount Kilwarden, who warned Tone to leave Ireland in 1795. Then, when Tone was arrested and brought to Dublin in 1798, facing certain execution, it was Kilwarden (a senior judge) who granted two orders of habeas corpus for his release. This was remarkable, given that the rebellion had just occurred with great loss of life, and the matter could never be explained further, as Kilwarden was unlucky enough to be killed in the riot that began Emmet’s revolt in 1803. The suggestion is that the Wolfes knew that Tone was a cousin; Tone himself may not have known. As a pillar of the Protestant Ascendancy, notorious at the time for his prosecution of William Orr, Kilwarden had no other apparent motive for trying to assist Tone in 1795 and 1798. Portraits of the Wolfes from around 1800 arguably show a resemblance to the rebel leader.

Emily Wolfe (1892-1980), the last of the Wolfes to live in Kildare, continued her family tradition of annually laying flowers at Tone’s grave until her death.

Legacy

“He rises,” says William Lecky, the nineteenth century historian, “far above the dreary level of commonplace which Irish conspiracy in general presents. The tawdry and exaggerated rhetoric; the petty vanity and jealousies; the weak sentimentalism; the utter incapacity for proportioning means to ends, and for grasping the stern realities of things, which so commonly disfigure the lives and conduct even of the more honest members of his class, were wholly alien to his nature. His judgment of men and things was keen, lucid and masculine, and he was alike prompt in decision and brave in action.”

In his later years, he overcame the drunkenness that was habitual to him in youth; he developed seriousness of character and unselfish devotion to the cause of patriotism; and he won the respect of men of high character and capacity in France and the Netherlands. His journals, which were written for his family and intimate friends, give a singularly interesting and vivid picture of life in Paris in the time of the Directory. They were published after his death by his son, William Theobald Wolfe Tone (1791-1828), who was educated by the French government and served with some distinction in the armies of Napoleon, emigrating after Waterloo to America, where he died, in New York City, on October 10, 1828, at the age of 37. His mother, Matilda (or Mathilda) Tone also emigrated to the United States, and she is buried in Green-Wood Cemetery in Brooklyn, New York.

 

Ahmad Shāh Durrānī


Ahmad Shāh Durrānī (c. 1723 – 1773), also known as Ahmad Shāh Abdālī and born Ahmad Khān Abdālī, was the founder of the Durrani Empire and is regarded by many to be the founder of modern Afghanistan. The Pashtuns of Afghanistan often call him Bābā (“father”). He also used the title “pearl of pearls,” or “pearl of the age” (Durr-i-Durrani), hence the name of his dynasty. Following the assassination of Nader Shah Afshar, he became the Amir of Khorasan. After consolidating his rule over territory stretching between the Amu Darya and the Indian Ocean, and from Khorasan into Kashmir, the Punjab, and Sindh, he invaded India on nine occasions. At the time, only the Ottoman Empire was larger in the Muslim world. In 1757, he sacked the cities of Delhi, Agra, Mathura, and Vrndavana, but made no attempt to establish rule there. He confronted the Sikhs in the Punjab during an extended campaign, eventually abandoning that region.

Faced with unrest at home towards the end of his life, he concentrated on domestic matters. He replaced weak regional rulers in his Empire with a strong centralized government. His policy of appointing counselors drawn from the most important tribal sirdars helped to unite these traditionally fractious units under his rule. Unable to maintain this unity, his successors oversaw the Empire’s disintegration into smaller, rival units. Ahmad Shāh Durrānī’s legacy suggests that, faced with a history of strong tribal and weak national authority, unity can be achieved by sharing power between the center and local elites. However, this unity was fragile, requiring more nurture than his heirs were able or willing to provide. The key challenge facing Afghanistan today remains the task of building a genuine, indigenous national unity that transcends historical tribal loyalties. Ahmad Shāh Durrānī is remembered as a just and moderate ruler. He was also a poet. The last Durrani ruler, Ayub Shah, died in 1823, ending the dynasty.

Early years

Ahmad Khan (later Ahmad Shah), from the Sadozai section of the Popalzai clan of the Abdali tribe of the Pashtuns, was born in Multan, Punjab. He was the second son of Mohammed Zaman Khan, chief of the Abdalis. In his youth, Ahmad Shah and his elder brother, Zulfikar Khan, were imprisoned inside a fortress by Hussein Khan, the Ghilzai governor of Kandahar. Hussein Khan commanded the Ghilzais, a powerful tribe of Afghans who, having conquered the eastern part of Persia a few years previously, had threatened the power of the Safavids.

Around 1731, Nader Shah Afshar, the new ruler of Persia and founder of the Afsharid Dynasty (1736-1796), began enlisting the Abdalis in his army. After Nader Shah conquered Kandahar in 1737, Ahmad Khan and his brother were freed by the new Persian ruler. The Ghilzai were expelled from Kandahar and the Abdalis were allowed to settle there instead.

Serving Nader Shah

Nader Shah favored Abdali due to his young and handsome features, and gave him his title of “Dur-i-Durran” (Pearl of Pearls). Subsequently, Ahmad Khan changed the Abdali tribe’s name to the Durrani tribe. Proving himself a loyal and capable officer in Nader Shah’s service, Ahmad Khan was promoted from a personal attendant (yasāwal) to command a cavalry of Abdali tribesmen. He then quickly rose to command a cavalry contingent estimated at four thousand strong, composed chiefly of Abdalis, in the service of the Shah on his invasion of India in 1738. Delhi was sacked, and the famous Peacock Throne of the Mughal Emperors, together with the Koh-i-Noor diamond, were taken back to Persia.

Popular history has it that the brilliant but megalomaniac Nader Shah could see the talent in his young commander. Later on, according to Pashtun legend, Nader Shah summoned Ahmad Khan Abdali and informed him that when he was dead, kingship in the region would pass to Ahmad Khan Abdali, but that he ought to treat his (Nader Shah’s) heirs kindly. Ahmad is reported to have responded by pledging to serve Nader Shah as he wished, even to die or be slain for him, and assured him that there was no need for concern about the future safety of his children.

Nader Shah’s assassination

Nader Shah’s rule abruptly ended in June 1747, when he was assassinated (probably a result of his somewhat despotic rule). The Turkoman guards involved in the assassination acted secretly so as to prevent the Abdalis from coming to their King’s rescue. However, Ahmad Khan was told that Nader Shah had been killed by one of his wives. Despite the danger of being attacked, the Abdali contingent led by Ahmad Khan rushed either to save Nader Shah or to confirm what had happened. When they reached the King’s tent, they saw Nader Shah’s dead body and severed head. Having served him so loyally, the Abdalis wept at having failed their leader, and headed back to Kandahar. On their way, the Abdalis decided that Ahmad Khan would be their new leader, and began calling him Ahmad Shah.

Rise to power

Later the same year (1747), the chiefs of the Durrani (Abdali) tribes met near Kandahar for a Loya Jirga to choose their new leader. For nine days, serious discussions were held among the candidates in the Argah. Ahmad Shah remained silent, not campaigning for himself. At last Sabir Shah, a religious chief, emerged from his sanctuary and addressed the gathering. He told the Jirga that he could find no one who was more worthy of leadership, or who was more trustworthy and talented, than Ahmad Khan. The leaders agreed unanimously. Ahmad Khan was chosen to lead the tribes. Coins were struck for his coronation as Padshah, which occurred in October 1747, near the tomb of Shaikh Surkh, adjacent to Nadir Abad Fort.

Although he was younger than the other claimants, several overriding factors were in his favor:

  • He was a direct descendant of Sado, patriarch of the Sadozai clan, the most prominent tribe amongst the Pashtuns at the time
  • He was unquestionably a charismatic leader and seasoned warrior who had at his disposal a trained, mobile force of several thousand cavalrymen (ability to retain power and territory was seen as a vital qualification)
  • He was the undisputed heir of Nadir Shah’s Kingdom
  • Haji Ajmal Khan, the chief of the Mohammedzais (also known as Barakzais), rivals of the Sadozais, had already withdrawn from the election

One of Ahmad Khan’s (now Ahmad Shah’s) first acts as chief was to adopt the title “Durr-i-Durrani” (“pearl of pearls” or “pearl of the age”), since Nader Afshar had always used this title for him.

Military campaigns

Following his predecessor, Ahmad Shah set up a special force closest to him consisting mostly of his fellow Durranis, Tājiks, Kizilbāshes, and Yūzufzais.

Ahmad Shah began his military conquest by capturing Ghazni from the Ghilzai Pashtuns. He then took Kabul from the local ruler, thus strengthening his hold over eastern Khorasan, comprising most of present-day Afghanistan. Leadership of the various Afghan tribes rested mainly on the ability to provide booty for the clan, and Ahmad Shah proved remarkably successful in providing both booty and military action for his followers. Apart from invading the Punjab three times between 1747 and 1753, he captured Herāt in 1750, and both Nishapur (Neyshābūr) and Mashhad in 1751.

Ahmad Shah first crossed the Indus river in 1748, the year after his ascension. His forces sacked Lahore during that expedition. The following year (1749), the Mughal ruler was induced to cede Sindh and all of the Punjab west of the Indus River to him, in order to save his capital from being attacked. Having gained substantial territories to the east without a fight, Ahmad Shah turned westward to take possession of Herat, which was at the time ruled by Nadir Shah’s grandson, Shah Rukh of Persia. The city fell to Ahmad Shah in 1750, after almost a year of siege and bloody conflict. Ahmad Shah then pushed on into present-day Iran, capturing Nishapur and Mashhad in 1751.

Meanwhile, in the preceding three years, the Sikhs had occupied the city of Lahore, so Ahmad Shah had to return in 1751, to oust them. In 1752, he invaded and reduced Kashmir.

In 1756/57, in what was his fourth invasion of India, Ahmad Shah sacked Delhi and plundered Agra, Mathura, and Vrndavana. However, he did not displace the Mughal dynasty, which remained in nominal control as long as the ruler acknowledged Ahmad’s suzerainty over the Punjab, Sindh, and Kashmir. He left the Mughal Emperor, Alamgir II, on the throne as a puppet, and arranged marriages for himself and his son Timur into the Imperial family that same year. He married a daughter of the Mughal emperor Muhammad Shah. Leaving his second son, Timur Shah (who was married to the daughter of Alamgir II), to safeguard his interests, Ahmad finally left India to return to Afghanistan. On his way back, he attacked the Golden Temple in Amritsar and filled its sarovar (sacred pool) with the blood of slaughtered cows and people. The Golden Temple’s significance and sanctity to the Sikhs can be compared with what Mecca is to the Muslims, so this act initiated a long period of bitterness between Sikhs and Afghans.

Third battle of Panipat

The Mughal power in northern India had been declining since the reign of Aurangzeb, who died in 1707. The Hindu Marathas, who already controlled much of western and central India from their capital at Pune, were straining to expand their area of control. After Ahmad Shah sacked the Mughal capital and withdrew with the booty he coveted, the Marathas filled the power void. In 1758, within a year of Ahmad Shah’s return to Kandahar, they secured possession of the Punjab, and succeeded in ousting his son Timur Shah and his court from India.

Encouraged by appeals from Muslim leaders including Shah Waliullah, Ahmad Shah chose to return to India and face the formidable challenge posed by the Maratha Confederacy. He declared a jihad (Islamic holy war) against the Marathas, and warriors from various Pashtun tribes, as well as other tribes, such as the Baloch, Tajiks, and other Muslims in India, answered his call. Early skirmishes ended in victory for the Afghans. By 1759, Ahmad Shah and his army had reached Lahore and were poised to confront the Marathas. By 1760, the Maratha groups had coalesced into a great army that probably outnumbered Ahmad Shah’s forces. Once again, Panipat was the scene of a confrontation between two warring contenders for control of northern India. The Third battle of Panipat (January 1761), fought between largely Muslim and largely Hindu armies who numbered as many as 100,000 troops each, was waged along a twelve-kilometer front. The result was a decisive victory for Ahmad Shah.

Administration and government

At stated periods, Ahmad Shah held what is termed a Majlis-e-Ulema, or Assembly of the Learned. The early part of the assembly was generally devoted to divinity and civil law (Ahmad Shah himself was considered a Molawi, or master), and it concluded with conversations on science and poetry. As a rule, he did not meddle with the tribes or their customs as long as they did not interfere with his ambitions. He appointed a Prime Minister and a Council of Nine lifetime advisers, all of whom were leaders (sirdars) of the main tribal factions. This was a deliberate effort to overcome the tendency towards disunity and inter-tribal conflict that was then, and still is, a characteristic of the region.

Decline

The victory at Panipat was the high point of Ahmad Shah’s and Afghan power. His empire was by now among the largest in the world, second only to the Ottoman Empire in the Muslim-majority world. However, this situation was not destined to last very long, and the empire soon began to unravel. As early as the end of 1761, the Sikhs were rebelling in much of the Punjab. In 1762, Ahmad Shah crossed the passes from Afghanistan for the sixth time to crush the Sikhs. He assaulted Lahore and Amritsar. Within two years, the Sikhs rebelled again, and he launched another campaign against them in 1764, resulting in a severe Sikh defeat. During his eighth invasion of India, the Sikhs vacated Lahore, but faced Abdali’s army and general, Jahan Khan. The fear of his Indian empire falling to the Sikhs continued to obsess Ahmad Shah’s mind, and he set out on another campaign against the Sikhs towards the close of 1766. This was his ninth and final invasion of India. The Sikhs had recourse to their old game of hide and seek. Again, they vacated Lahore, then squarely faced the Afghan general Jahan Khan at Amritsar, forcing him to retreat. Six thousand of Abdali’s soldiers were killed. Jassa Singh Ahluwalia, with an army of about twenty thousand Sikhs, then roamed the neighborhood of the Afghan camp, plundering it to his heart’s content. Ahmad Shah’s dream of capturing the whole of India was dying before his own eyes. After this, the Sikhs ruled the region until 1849, when they lost to the British in the Second Anglo-Sikh War.

In the spring of 1761, Ahmad Shah returned to Kabul. From that period until the spring of 1773, he was actively employed against foreign and domestic enemies. His health, which had been declining for some time, continued to get worse, preventing him from engaging in any foreign expeditions. His complaint, a facial cancer that had first afflicted him in 1764, finally caused his death. He died at Murghah, in Afghanistan, in the beginning of June 1773, in his fiftieth year. He was succeeded by his son, Timur Shah Durrani.

Legacy

Ahmad Shah’s successors, beginning with his son, Timur, proved largely incapable of governing the Durrani empire. Faced with advancing enemies on all sides, the empire collapsed within 50 years of Ahmad Shah’s death. Much of the territory that he had conquered fell to others during this half century. Instead of sharing power with the sirdars, the later Durrani rulers alienated them by assuming absolute power and gathering advisers around them who were royal favorites rather than traditional tribal leaders. By 1818, Ahmad Shah’s heirs controlled little more than Kabul and the surrounding territory. They not only lost the outlying territories but also alienated other Pashtun tribes and other Durrani lineages. Until Dost Mohammad Khan’s ascendancy in 1826, chaos reigned in Afghanistan, which effectively ceased to exist as a single entity, disintegrating into a fragmented collection of small units.

Ahmad Shah’s victory over the Marathas also influenced the history of the subcontinent and, in particular, British policy in the region. His refusal to continue his campaigns deeper into India (and inevitably clash with the British East India Company) allowed the Company to continue to acquire power and influence after its acquisition of Bengal in 1757. However, fear of another Afghan invasion would long haunt British policy makers. Ahmad Shah’s military accomplishments were acknowledged in British intelligence reports on the Battle of Panipat, which referred to him as the “King of Kings.” Fear of an alliance between the French and the Afghans resulted in a series of diplomatic missions to forge anti-French alliances, including one in 1798 to Persia. Mountstuart Elphinstone was sent to Afghanistan in 1808 as the first British envoy (remaining there until 1811, when he was transferred to the Maratha capital), where he secured a treaty with the then ruler, Shah Shuja (who was overthrown very soon afterwards).

The most important historical monument in Kandahar is the mausoleum of Ahmad Shah Durrani, in which his epitaph is written:

The King of high rank, Ahmad Shah Durrani,
Was equal to Kisra in managing the affairs of his government.
In his time, from the awe of his glory and greatness,
The lioness nourished the stag with her milk.
From all sides in the ear of his enemies there arrived
A thousand reproofs from the tongue of his dagger.
The date of his departure for the house of mortality
Was the year of the Hijra 1186 (1772 C.E.).

Elphinstone wrote of Ahmad Shah:

His military courage and activity are spoken of with admiration, both by his own subjects and the nations with whom he was engaged, either in wars or alliances. He seems to have been naturally disposed to mildness and clemency and though it is impossible to acquire sovereign power and perhaps, in Asia, to maintain it, without crimes; yet the memory of no eastern prince is stained with fewer acts of cruelty and injustice.

 

Emilio Aguinaldo y Famy


Emilio Aguinaldo y Famy (March 22, 1869 – February 6, 1964) was a Filipino general, politician, and independence leader. He played an instrumental role in Philippine independence during the Philippine Revolution against Spain and the Philippine-American War to resist American occupation. In 1895, Aguinaldo joined the Katipunan rebellion, a secret organization then led by Andrés Bonifacio, dedicated to the expulsion of the Spanish and independence of the Philippines through armed force. He quickly rose to the rank of general and established a power base among rebel forces. Defeated by the Spanish forces, he accepted exile in December 1897. After the start of the Spanish-American War, he returned to the Philippines, where he established a provisional dictatorial government and, on June 12, 1898, proclaimed Philippine independence. Soon after the defeat of the Spanish, open fighting broke out between American troops and pro-independence Filipinos. Superior American firepower drove Filipino troops away from the city, and the Malolos government had to move from one place to another. Aguinaldo eventually pledged his allegiance to the U.S. government in March 1901, and retired from public life.

In the Philippines, Aguinaldo is considered the country’s first and youngest president, though his government failed to obtain any foreign recognition.

Early life and career

The seventh of eight children of Crispulo Aguinaldo and Trinidad Famy, Emilio Aguinaldo was born into a Filipino family on March 22, 1869, in Cavite El Viejo (now Kawit), Cavite province. His father was gobernadorcillo (town head), and, as members of the Chinese-mestizo minority, his family enjoyed relative wealth and power.

At the age of two, he contracted smallpox and was given up for dead until he opened his eyes. At three, he was bitten by hundreds of ants when a relative abandoned him in a bamboo clump while hiding from Spanish troops on a mission of retaliation for the Cavite Mutiny of 1872. He almost drowned when he jumped into the Marulas River on a playmate’s dare, and found he did not know how to swim.

As a young boy, Aguinaldo received basic education from his great-aunt and later attended the town’s elementary school. In 1880, he took up his secondary course education at the Colegio de San Juan de Letran, which he quit in his third year to return home and help his widowed mother manage their farm.

At the age of 17, Emilio was elected cabeza de barangay of Binakayan, the most progressive barrio of Cavite El Viejo. He held this position, representing the local residents, for eight years. He also engaged in inter-island shipping, traveling as far south as the Sulu Archipelago. Once on a trading voyage to the nearby southern islands, while riding in a big paraw (sailboat with outriggers), he grappled with, subdued, and landed a large man-eating shark, thinking it was just a large fish.

In 1893, the Maura Law was passed to reorganize town governments with the aim of making them more effective and autonomous, changing the designation of town head from gobernadorcillo to capitan municipal, effective 1895. On January 1, 1895, Aguinaldo was elected town head, becoming the first person to hold the title of capitan municipal of Cavite El Viejo.

Family

His first marriage was in 1896, to Hilaria Del Rosario (1877–1921), and they had five children (Miguel, Carmen, Emilio Jr., Maria, and Cristina). His second wife was Maria Agoncillo.

Several of Aguinaldo’s descendants became prominent political figures in their own right. A grandnephew, Cesar Virata, served as Prime Minister of the Philippines from 1981 to 1986. Aguinaldo’s granddaughter, Ameurfina Melencio Herrera, served as an Associate Justice of the Supreme Court from 1979 until 1992. His great-grandson, Joseph Emilio Abaya, was elected to the House of Representatives in the 13th and 14th Congresses, representing the 1st District of Cavite. The present mayor of Kawit, Cavite, Reynaldo Aguinaldo, is a grandson of the former president, while the vice-mayor, Emilio “Orange” Aguinaldo IV, is a great-grandson.

Philippine revolution

In 1895, Aguinaldo joined the Katipunan rebellion, a secret organization then led by Andrés Bonifacio, dedicated to the expulsion of the Spanish and independence of the Philippines through armed force. He joined as a lieutenant under Gen. Baldomero Aguinaldo and rose to the rank of general in a few months. The same week that he received his new rank, 30,000 members of the Katipunan launched an attack against the Spanish colonists; only Emilio Aguinaldo’s troops launched a successful attack. In 1896, the Philippines erupted in revolt against the Spaniards. Aguinaldo won major victories for the Katipunan in Cavite Province, temporarily driving the Spanish out of the area. However, renewed Spanish military pressure compelled the rebels to restructure their forces in a more cohesive manner; the insulated fragmentation that had protected the Katipunan’s secrecy had outlived its usefulness. By now, the Katipunan had divided into two factions: one, the Magdalo, led by Aguinaldo and based in Kawit, thought that it was time to organize a revolutionary government to replace the Katipunan; the other, the Magdiwang, led by Bonifacio, opposed this move.

On March 22, 1897, Bonifacio presided over the Tejeros Convention in Tejeros, Cavite (deep in Baldomero Aguinaldo territory), to elect a revolutionary government in place of the Katipunan. Away from his power base, Bonifacio unexpectedly lost the leadership to Aguinaldo, and was elected instead to the office of Secretary of the Interior. Even this was questioned by an Aguinaldo supporter, who claimed that Bonifacio did not have the necessary schooling for the job. Insulted, Bonifacio declared the Convention null and void, and sought to return to his power base in Rizal. Bonifacio was charged, tried, found guilty of treason (in absentia), and sentenced to death by a Cavite military tribunal. He and his party were intercepted by Aguinaldo’s men in a violent encounter that left Bonifacio mortally wounded. Aguinaldo confirmed the death sentence, and the dying Bonifacio was hauled to the mountains of Maragondon in Cavite, and executed on May 10, 1897, even as Aguinaldo and his forces were retreating in the face of Spanish assault.

Biak-na-Bato

In June, Spanish pressure intensified, eventually forcing Aguinaldo’s revolutionary government to retreat to the village of Biak-na-Bato in the mountains. General Emilio Aguinaldo negotiated the Pact of Biak-na-Bato, which specified that the Spanish would give self-rule to the Philippines within three years if Aguinaldo went into exile. Under the pact, Aguinaldo agreed to end hostilities in exchange for amnesty and 800,000 pesos as an indemnity, and he and the other revolutionary leaders would go into voluntary exile. Another 900,000 pesos was to be given to the revolutionaries who remained in the Philippines, who agreed to surrender their arms; general amnesty would be granted and the Spaniards would institute reforms in the colony. On December 14, 1897, Aguinaldo was shipped to Hong Kong, along with some of the members of his revolutionary government. Emilio Aguinaldo was President, with Mariano Trias as Vice President; other officials included Antonio Montenegro as Minister of Foreign Affairs, Isabelo Artacho as Minister of the Interior, Baldomero Aguinaldo as Minister of the Treasury, and Emiliano Riego de Dios as Minister of War.

Spanish-American War

Thousands of other Katipuneros continued to fight the Revolution against Spain for a sovereign nation. In May 1898, war broke out between Spain and the United States, and the Spanish fleet was destroyed in Manila Bay by the squadron of U.S. Commodore George Dewey. Aguinaldo, who had already agreed to a supposed alliance with the United States through the American consul in Singapore, returned to the Philippines in May 1898, and immediately resumed revolutionary activities against the Spaniards, now receiving verbal encouragement from emissaries of the United States. In Cavite, on the advice of lawyer Ambrosio Rianzares Bautista, he established a provisional dictatorial government to “repress with a strong hand the anarchy which is the inevitable sequel of all revolutions.” On June 12, 1898, he proclaimed Philippine independence in Kawit, and began to organize local political units all over the Philippines.

From Cavite, Aguinaldo led his troops to victory after victory over the Spanish forces until they reached the city of Manila. After the surrender of the Spaniards, however, the Americans forbade the Filipinos to enter the Walled City of Intramuros. Aguinaldo convened a Revolutionary Congress at Malolos to ratify the independence of the Philippines and to draft a constitution for a republican form of government.

Presidency of the First Republic of the Philippines

Aguinaldo Cabinet

President Aguinaldo had two cabinets in the year 1899. Thereafter, the war situation resulted in his ruling by decree.

Philippine-American War

On the night of February 4, 1899, a Filipino was shot by an American sentry as he crossed Silencio Street, Sta. Mesa, Manila. This incident is considered the beginning of the Philippine-American War, and open fighting soon broke out between American troops and pro-independence Filipinos. Superior American firepower drove Filipino troops away from the city, and the Malolos government had to move from one place to another. Offers by U.S. President William McKinley to set up an autonomous Philippine government under an American flag were rejected.

Aguinaldo led resistance to the Americans, then retreated to northern Luzon with the Americans on his trail. On June 2, 1899, Gen. Antonio Luna, an arrogant but brilliant general and Aguinaldo’s looming rival in the military hierarchy, received a telegram from Aguinaldo, ordering him to proceed to Cabanatuan, Nueva Ecija, for a meeting at the Cabanatuan Church Convent. Three days later, on June 5, Luna arrived and learned that Aguinaldo was not at the appointed place. As Gen. Luna was about to depart, he was shot, then stabbed to death by Aguinaldo’s men. Luna was later buried in the churchyard; Aguinaldo made no attempt to punish or discipline Luna’s murderers.

Less than two years later, after the famous Battle of Tirad Pass and the death of his last most trusted general, Gregorio del Pilar, Aguinaldo was captured in Palanan, Isabela, on March 23, 1901, by U.S. General Frederick Funston, with the help of Macabebe trackers. The American task force gained access to Aguinaldo’s camp by pretending to be captured prisoners.

Funston later noted Aguinaldo’s “dignified bearing,” “excellent qualities,” and “humane instincts.” Aguinaldo volunteered to swear fealty to the United States, if his life was spared. Aguinaldo pledged allegiance to America on April 1, 1901, formally ending the First Republic and recognizing the sovereignty of the United States over the Philippines. He issued a manifesto urging the revolutionaries to lay down their arms. Others, like Miguel Malvar and Macario Sakay, continued to resist the American occupation.

U.S. occupation

Aguinaldo retired from public life for many years. During the United States occupation, Aguinaldo organized the Asociación de los Veteranos de la Revolución (Association of Veterans of the Revolution), which worked to secure pensions for its members and made arrangements for them to buy land on installment from the government.

When the American government finally allowed the Philippine flag to be displayed in 1919, Aguinaldo transformed his home in Kawit into a monument to the flag, the revolution, and the declaration of Independence. His home still stands, and is known as the Aguinaldo Shrine.

On March 6, 1921, his first wife died, and in 1930, he married Dona Maria Agoncillo, niece of Don Felipe Agoncillo, the pioneer Filipino diplomat.

In 1935, when the Commonwealth of the Philippines was established in preparation for Philippine independence, he ran for president but lost by a landslide to the fiery Spanish mestizo Manuel L. Quezon. The two men formally reconciled in 1941, when President Quezon moved Flag Day to June 12, to commemorate the proclamation of Philippine independence.

Aguinaldo again retired to private life, until the Japanese invasion of the Philippines in World War II. He cooperated with the Japanese, making speeches, issuing articles, and delivering radio addresses in support of the occupiers, including an infamous radio appeal to Gen. Douglas MacArthur on Corregidor to surrender in order to spare the flower of Filipino youth. After the Americans retook the Philippines, Aguinaldo was arrested along with several others accused of collaboration with the Japanese. He was held in Bilibid Prison for months until released by presidential amnesty, it eventually being deemed that his collaboration had probably been carried out under great duress.

Aguinaldo lived to see independence granted to the Philippines on July 4, 1946, when the United States Government marked the full restoration and recognition of Philippine sovereignty. He was 93 when President Diosdado Macapagal officially changed the date of independence from July 4 to June 12, the 1898 date Aguinaldo believed to be the true Independence Day. During the independence parade at the Luneta, the 93-year-old general carried the flag he had raised in Kawit.

Post-American era

In 1950, President Elpidio Quirino appointed Aguinaldo as a member of the Council of State, where he served a full term. He returned to retirement soon after, dedicating his time and attention to veteran soldiers’ interests and welfare.

In 1962, when the United States rejected Philippine claims for the destruction wrought by American forces in World War II, President Diosdado Macapagal changed the celebration of Independence Day from July 4 to June 12. Aguinaldo rose from his sickbed to attend the celebration of independence 64 years after he declared it.

Aguinaldo died on February 6, 1964, of coronary thrombosis at the Veterans Memorial Hospital in Quezon City. He was 94 years old. His remains are buried at the Aguinaldo Shrine in Kawit, Cavite. When he died, he was the last surviving non-royal head of state to have served in the nineteenth century.

Legacy

Filipino historians are ambivalent about Aguinaldo’s role in the history of the Philippines. He was the leader of the revolution and the first president of the first republic, but he is criticized for ordering the execution of Andres Bonifacio, for his possible involvement in the murder of Antonio Luna, and for accepting an indemnity payment and exile in Hong Kong. Some scholars view him as an example of the leading role taken by members of the landowning elite in the revolution.

 

Sun


The Sun is the star at the center of the Earth’s solar system. The Earth and other matter (including other planets, asteroids, comets, meteoroids, and dust) orbit the Sun, which by itself accounts for more than 99 percent of the solar system’s mass. Energy from the Sun—in the form of insolation from sunlight—supports almost all life on Earth via photosynthesis, and drives the Earth’s climate and weather.

About 74 percent of the Sun’s mass is hydrogen, 25 percent is helium, and the rest is made up of trace quantities of heavier elements. The Sun is thought to be about 4.6 billion years old and about halfway through its main-sequence evolution. Within the Sun’s core, nuclear fusion reactions take place, with hydrogen nuclei being fused into helium nuclei. Through these reactions, more than 4 million tons of matter are converted into energy each second, producing neutrinos and solar radiation. Current theory predicts that in about five billion years, the Sun will evolve into a red giant and then a white dwarf, creating a planetary nebula in the process.

The Sun is a magnetically active star. It supports a strong, changing magnetic field that varies year-to-year and reverses direction about every 11 years. The Sun’s magnetic field gives rise to many effects that are collectively called solar activity. They include sunspots on the Sun’s surface, solar flares, and variations in the solar wind that carry material through the solar system. The effects of solar activity on Earth include auroras at moderate to high latitudes, and the disruption of radio communications and electric power. Solar activity is thought to have played a large role in the formation and evolution of the solar system, and strongly affects the structure of the Earth’s outer atmosphere.

Although it is the nearest star to Earth and has been intensively studied by scientists, many questions about the Sun remain unanswered. For instance, we do not know why its outer atmosphere has a temperature of over a million K while its visible surface (the photosphere) has a temperature of just 6,000 K. Current topics of scientific inquiry include the Sun’s regular cycle of sunspot activity, the physics and origin of solar flares and prominences, the magnetic interaction between the chromosphere and the corona, and the origin of the solar wind.

The Sun is sometimes referred to by its Latin name Sol or its Greek name Helios. Its astrological and astronomical symbol is a circle with a point at its center: ☉. Some ancient peoples of the world considered it a planet.

General information

The Sun is placed in a spectral class called G2V. “G2” means that it has a surface temperature of approximately 5,500 K, giving it a white color; as a consequence of light scattering by the Earth’s atmosphere, it appears yellow to us. Its spectrum contains lines of ionized and neutral metals, as well as very weak hydrogen lines. The “V” suffix indicates that the Sun, like most stars, is a main sequence star. This means that it generates its energy by nuclear fusion of hydrogen nuclei into helium and is in a state of hydrostatic balance: neither contracting nor expanding over time. There are more than 100 million G2 class stars in our galaxy. Due to the logarithmic size distribution of stars, the Sun is actually brighter than 85 percent of the stars in the galaxy, most of which are red dwarfs. The Sun will spend a total of approximately 10 billion years as a main sequence star. Its current age, determined using computer models of stellar evolution and nucleocosmochronology, is thought to be about 4.57 billion years. The Sun orbits the center of the Milky Way galaxy at a distance of about 25,000 to 28,000 light-years from the galactic center, completing one revolution in about 225–250 million years. The orbital speed is 220 km/s, equivalent to one light-year every 1,400 years, and one AU every 8 days.
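The conversions quoted for the Sun’s orbital speed can be checked with a few lines of arithmetic. The lengths of a light-year and an astronomical unit used below are standard values, not figures given in this article:

```python
# Rough check: express a 220 km/s orbital speed as years per
# light-year and days per AU.
LIGHT_YEAR_KM = 9.4607e12   # kilometers in one light-year
AU_KM = 1.496e8             # kilometers in one astronomical unit
SECONDS_PER_YEAR = 3.156e7
SECONDS_PER_DAY = 86_400

speed_km_s = 220.0

years_per_light_year = LIGHT_YEAR_KM / speed_km_s / SECONDS_PER_YEAR
days_per_au = AU_KM / speed_km_s / SECONDS_PER_DAY

print(f"one light-year every {years_per_light_year:,.0f} years")
print(f"one AU every {days_per_au:.0f} days")
```

Both results come out close to the rounded figures quoted above, about 1,400 years per light-year and about 8 days per AU.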

It has been suggested, based on the high abundance of heavy elements such as gold and uranium in the solar system, that the Sun is a third-generation star whose formation may have been triggered by shockwaves from a nearby supernova. These elements could most plausibly have been produced by endergonic nuclear reactions during a supernova, or by transmutation via neutron absorption inside a massive second-generation star.

The Sun does not have enough mass to explode as a supernova. Instead, in 4–5 billion years, it will enter a red giant phase, its outer layers expanding as the hydrogen fuel in the core is consumed and the core contracts and heats up. Helium fusion will begin when the core temperature reaches about 3×10⁸ K. While it is likely that the expansion of the outer layers of the Sun will reach the current position of Earth’s orbit, recent research suggests that mass lost from the Sun earlier in its red giant phase will cause the Earth’s orbit to move further out, preventing it from being engulfed. However, Earth’s water and most of the atmosphere will be boiled away.

Following the red giant phase, intense thermal pulsations will cause the Sun to throw off its outer layers, forming a planetary nebula. The Sun will then evolve into a white dwarf, slowly cooling over eons. This stellar evolution scenario is typical of low- to medium-mass stars.

Sunlight is the main source of energy near the surface of Earth. The solar constant is the amount of power that the Sun deposits per unit area that is directly exposed to sunlight. The solar constant is equal to approximately 1,370 watts per square meter of area at a distance of one AU from the Sun (that is, on or near Earth). Sunlight on the surface of Earth is attenuated by the Earth’s atmosphere so that less power arrives at the surface—closer to 1,000 watts per directly exposed square meter in clear conditions when the Sun is near the zenith. This energy can be harnessed via a variety of natural and synthetic processes—photosynthesis by plants captures the energy of sunlight and converts it to chemical form (oxygen and reduced carbon compounds), while direct heating or electrical conversion by solar cells are used by solar power equipment to generate electricity or to do other useful work. The energy stored in petroleum and other fossil fuels was originally converted from sunlight by photosynthesis in the distant past.
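The quoted solar constant follows from the inverse-square law applied to the Sun’s total power output. A minimal sketch, using a standard luminosity value that is not given in the article:

```python
# Recover the ~1,370 W/m² solar constant from the Sun's luminosity
# spread over a sphere of radius one AU.
import math

L_SUN = 3.846e26   # solar luminosity in watts (standard value)
AU_M = 1.496e11    # one astronomical unit in meters

solar_constant = L_SUN / (4 * math.pi * AU_M**2)
print(f"solar constant ≈ {solar_constant:.0f} W/m^2")
```

The result lands within a few watts of the 1,370 W/m² figure quoted above.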

Sunlight has several interesting biological properties. Ultraviolet light from the Sun has antiseptic properties and can be used to sterilize tools. It also causes sunburn, and has other medical effects such as the production of Vitamin D. Ultraviolet light is strongly attenuated by Earth’s atmosphere, so that the amount of UV varies greatly with latitude due to the longer passage of sunlight through the atmosphere at high latitudes. This variation is responsible for many biological adaptations, including variations in human skin color in different regions of the globe.

Observed from Earth, the path of the Sun across the sky varies throughout the year. The shape described by the Sun’s position, considered at the same time each day for a complete year, is called the analemma and resembles a figure 8 aligned along a North/South axis. While the most obvious variation in the Sun’s apparent position through the year is a North/South swing over 47 degrees of angle (due to the 23.5-degree tilt of the Earth with respect to the Sun), there is an East/West component as well. The North/South swing in apparent angle is the main source of seasons on Earth.

Structure

The Sun is an average-sized star. It contains about 99 percent of the total mass of the solar system, and its volume is 1,303,600 times that of the Earth; hydrogen makes up about 71 percent of the Sun’s mass. The Sun is a near-perfect sphere, with an oblateness estimated at about 9 millionths, meaning that its polar diameter differs from its equatorial diameter by only about 10 km. The Sun does not rotate as a solid body: the rotational period is 25 days at the equator and about 35 days at the poles, or approximately 28 days overall. The centrifugal effect of this slow rotation is 18 million times weaker than the surface gravity at the Sun’s equator. Tidal effects from the planets do not significantly affect the shape of the Sun, although the Sun itself orbits the center of mass of the solar system, which is located nearly a solar radius away from the center of the Sun, mostly because of the large mass of Jupiter.
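The oblateness and diameter-difference figures are mutually consistent, as a quick sketch shows; the solar diameter used below is a standard value, not taken from the article:

```python
# Polar/equatorial diameter difference implied by the quoted
# oblateness of about 9 millionths.
SUN_DIAMETER_KM = 1.392e6   # mean solar diameter, km (standard value)
OBLATENESS = 9e-6           # (equatorial - polar) / equatorial

diff_km = OBLATENESS * SUN_DIAMETER_KM
print(f"diameter difference ≈ {diff_km:.0f} km")
```

The result is on the order of 10 km, the same scale as the difference quoted above.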

The Sun does not have a definite boundary as rocky planets do; the density of its gases drops approximately exponentially with increasing distance from the center of the Sun. Nevertheless, the Sun has a well-defined interior structure, described below. The Sun’s radius is measured from its center to the edge of the photosphere. This is simply the layer below which the gases are thick enough to be opaque but above which they are transparent; the photosphere is the surface most readily visible to the naked eye. Most of the Sun’s mass lies within about 0.7 radii of the center.

The solar interior is not directly observable, and the Sun itself is opaque to electromagnetic radiation. However, just as seismology uses waves generated by earthquakes to reveal the interior structure of the Earth, the discipline of helioseismology makes use of pressure waves traversing the Sun’s interior to measure and visualize the Sun’s inner structure. Computer modeling of the Sun is also used as a theoretical tool to investigate its deeper layers.

Core

The temperature of the Sun’s surface is about 5,800 K; the temperature at its core has been estimated at about 15,000,000 K. Energy is produced in the core by nuclear fusion, which converts hydrogen into helium and releases huge amounts of energy; it is the same reaction that occurs in a hydrogen bomb. The American physicist George Gamow once calculated that if a pinhead could be brought to the same temperature as the core of the Sun, it would set fire to everything for 100 kilometers around. At the center of the Sun, where its density reaches up to 150,000 kg/m³ (150 times the density of water on Earth), thermonuclear reactions (nuclear fusion) convert hydrogen into helium, releasing the energy that keeps the Sun in a state of equilibrium. About 8.9×10³⁷ protons (hydrogen nuclei) are converted into helium nuclei every second, releasing energy at the matter-energy conversion rate of 4.26 million metric tons per second, 383 yottawatts (383×10²⁴ W), or 9.15×10¹⁰ megatons of TNT per second. The fusion rate in the core is in a self-correcting equilibrium: a slightly higher rate of fusion would cause the core to heat up and expand slightly against the weight of the outer layers, reducing the fusion rate and correcting the perturbation; a slightly lower rate would cause the core to shrink slightly, increasing the fusion rate and again reverting it to its present level.
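The mass-conversion and power figures above can be checked against E = mc²: converting 4.26 million metric tons of matter per second should yield roughly 383 yottawatts.

```python
# Mass-energy check of the article's figures: power = mass rate × c².
C = 2.998e8               # speed of light, m/s
mass_rate_kg_s = 4.26e9   # 4.26 million metric tons per second, in kg/s

power_w = mass_rate_kg_s * C**2
print(f"power ≈ {power_w:.2e} W")
```

The result is approximately 3.83×10²⁶ W, matching the quoted 383 yottawatts.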

The core extends from the center of the Sun to about 0.2 solar radii, and is the only part of the Sun in which an appreciable amount of heat is produced by fusion; the rest of the star is heated by energy that is transferred outward. All of the energy produced by interior fusion must travel through many successive layers to the solar photosphere before it escapes into space.

The high-energy photons (gamma and X-rays) released in fusion reactions take a long time to reach the Sun’s surface, slowed down by the indirect path taken, as well as by constant absorption and reemission at lower energies in the solar mantle. Estimates of the “photon travel time” range from as much as 50 million years to as little as 17,000 years. After a final trip through the convective outer layer to the transparent “surface” of the photosphere, the photons escape as visible light. Each gamma ray in the Sun’s core is converted into several million visible light photons before escaping into space. Neutrinos are also released by the fusion reactions in the core, but unlike photons they very rarely interact with matter, so almost all are able to escape the Sun immediately. For many years measurements of the number of neutrinos produced in the Sun were much lower than theories predicted, a problem which was recently resolved through a better understanding of the effects of neutrino oscillation.
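The wide range of “photon travel time” estimates comes from modeling the photon’s path as a random walk, with escape time roughly t ≈ R²/(ℓc), where ℓ is the photon’s mean free path. The mean-free-path value in the sketch below is an illustrative assumption, not a figure from the article:

```python
# Order-of-magnitude random-walk estimate of photon escape time,
# t ≈ R² / (ℓ c), with an assumed mean free path of about 1 mm.
C = 3.0e8          # speed of light, m/s
R_SUN = 6.96e8     # solar radius, m (standard value)
MFP = 1.0e-3       # assumed mean free path, m (illustrative)

seconds = R_SUN**2 / (MFP * C)
years = seconds / 3.156e7
print(f"escape time ≈ {years:,.0f} years")
```

With this assumed mean free path the estimate comes out in the tens of thousands of years; shorter or longer mean free paths move the result across the 17,000-year to 50-million-year range quoted above.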

Radiation zone

From about 0.2 to about 0.7 solar radii, solar material is hot and dense enough that thermal radiation is sufficient to transfer the intense heat of the core outward. In this zone there is no thermal convection; while the material grows cooler as altitude increases, this temperature gradient is too low to drive convection. Heat is transferred by radiation—ions of hydrogen and helium emit photons, which travel a brief distance before being reabsorbed by other ions.

Convection zone

From about 0.7 solar radii to the Sun’s visible surface, the material in the Sun is not dense enough or hot enough to transfer the heat energy of the interior outward via radiation. As a result, thermal convection occurs as thermal columns carry hot material to the surface (photosphere) of the Sun. Once the material cools off at the surface, it plunges back downward to the base of the convection zone, to receive more heat from the top of the radiative zone. Convective overshoot is thought to occur at the base of the convection zone, carrying turbulent downflows into the outer layers of the radiative zone.

The thermal columns in the convection zone form an imprint on the surface of the Sun, in the form of the solar granulation and supergranulation. The turbulent convection of this outer part of the solar interior gives rise to a “small-scale” dynamo that produces magnetic north and south poles all over the surface of the Sun.

Photosphere

The visible surface of the Sun, the photosphere, is the layer below which the Sun becomes opaque to visible light. Above the photosphere visible sunlight is free to propagate into space, and its energy escapes the Sun entirely. The change in opacity is due to the decreasing amount of H⁻ ions, which absorb visible light easily. Conversely, the visible light we see is produced as electrons react with hydrogen atoms to produce H⁻ ions. Sunlight has approximately a black-body spectrum that indicates its temperature is about 6,000 K (10,340 °F / 5,727 °C), interspersed with atomic absorption lines from the tenuous layers above the photosphere. The photosphere has a particle density of about 10²³/m³ (about 1 percent of the particle density of Earth’s atmosphere at sea level).

During early studies of the optical spectrum of the photosphere, some absorption lines were found that did not correspond to any chemical elements then known on Earth. In 1868, Norman Lockyer hypothesized that these absorption lines were due to a new element which he dubbed “helium,” after the Greek Sun god Helios. It was not until 25 years later that helium was isolated on Earth.

Atmosphere

The parts of the Sun above the photosphere are referred to collectively as the solar atmosphere. They can be viewed with telescopes operating across the electromagnetic spectrum, from radio through visible light to gamma rays, and comprise five principal zones: the temperature minimum, the chromosphere, the transition region, the corona, and the heliosphere. The heliosphere, which may be considered the tenuous outer atmosphere of the Sun, extends outward past the orbit of Pluto to the heliopause, where it forms a sharp shock front boundary with the interstellar medium. The chromosphere, transition region, and corona are much hotter than the surface of the Sun; the reason why is not yet known.

The coolest layer of the Sun is a temperature minimum region about 500 km above the photosphere, with a temperature of about 4,000 K. This part of the Sun is cool enough to support simple molecules such as carbon monoxide and water, which can be detected by their absorption spectra. Above the temperature minimum layer is a thin layer about 2,000 km thick, dominated by a spectrum of emission and absorption lines. It is called the chromosphere from the Greek root chroma, meaning color, because the chromosphere is visible as a colored flash at the beginning and end of total eclipses of the Sun. The temperature in the chromosphere increases gradually with altitude, ranging up to around 100,000 K near the top.

Above the chromosphere is a transition region in which the temperature rises rapidly from around 100,000 K to coronal temperatures closer to one million K. The increase is due to a phase transition as helium within the region becomes fully ionized by the high temperatures. The transition region does not occur at a well-defined altitude. Rather, it forms a kind of nimbus around chromospheric features such as spicules and filaments, and is in constant, chaotic motion. The transition region is not easily visible from Earth’s surface, but is readily observable from space by instruments sensitive to the far ultraviolet portion of the spectrum.

The corona is the extended outer atmosphere of the Sun, which is much larger in volume than the Sun itself. The corona merges smoothly with the solar wind that fills the solar system and heliosphere. The low corona, which is very near the surface of the Sun, has a particle density of 10¹⁴/m³–10¹⁶/m³. (Earth’s atmosphere near sea level has a particle density of about 2×10²⁵/m³.) The temperature of the corona is several million kelvin. While no complete theory yet exists to account for the temperature of the corona, at least some of its heat is known to be due to magnetic reconnection.

The heliosphere extends from approximately 20 solar radii (0.1 AU) to the outer fringes of the solar system. Its inner boundary is defined as the layer in which the flow of the solar wind becomes superalfvénic – that is, where the flow becomes faster than the speed of Alfvén waves. Turbulence and dynamic forces outside this boundary cannot affect the shape of the solar corona within, because the information can only travel at the speed of Alfvén waves. The solar wind travels outward continuously through the heliosphere, forming the solar magnetic field into a spiral shape, until it impacts the heliopause more than 50 AU from the Sun. In December 2004, the Voyager 1 probe passed through a shock front that is thought to be part of the heliopause. Both of the Voyager probes have recorded higher levels of energetic particles as they approach the boundary.

Solar Activity

Sunspots and the solar cycle

When observing the Sun with appropriate filtration, the most immediately visible features are usually its sunspots, which are well-defined surface areas that appear darker than their surroundings due to lower temperatures. Sunspots are regions of intense magnetic activity where energy transport is inhibited by strong magnetic fields. They are often the source of intense flares and coronal mass ejections. The largest sunspots can be tens of thousands of kilometers across.

The number of sunspots visible on the Sun is not constant, but varies over a 10–12 year cycle known as the solar cycle. At a typical solar minimum, few sunspots are visible, and occasionally none at all can be seen. Those that do appear are at high solar latitudes. As the sunspot cycle progresses, the number of sunspots increases and they move closer to the equator of the Sun, a phenomenon described by Spörer’s law. Sunspots usually exist as pairs with opposite magnetic polarity. The polarity of the leading sunspot alternates every solar cycle, so that it will be a north magnetic pole in one solar cycle and a south magnetic pole in the next.

The solar cycle has a great influence on space weather, and seems also to have a strong influence on the Earth’s climate. Solar minima tend to be correlated with colder temperatures, and longer-than-average solar cycles tend to be correlated with hotter temperatures. In the 17th century, the solar cycle appears to have stopped entirely for several decades; very few sunspots were observed during the period. During this era, known as the Maunder minimum and coinciding with part of the Little Ice Age, Europe experienced very cold temperatures. Earlier extended minima have been discovered through analysis of tree rings and also appear to have coincided with lower-than-average global temperatures.

Effects on Earth and other bodies

Solar activity has several effects on the Earth and its surroundings. Because the Earth has a magnetic field, charged particles from the solar wind cannot impact the atmosphere directly, but are instead deflected by the magnetic field and aggregate to form the Van Allen belts. The Van Allen belts consist of an inner belt composed primarily of protons and an outer belt composed mostly of electrons. Radiation within the Van Allen belts can occasionally damage satellites passing through them.

The Van Allen belts form arcs around the Earth with their tips near the north and south poles. The most energetic particles can ‘leak out’ of the belts and strike the Earth’s upper atmosphere, causing auroras, known as the aurora borealis in the northern hemisphere and the aurora australis in the southern hemisphere. In periods of normal solar activity, aurorae can be seen in oval-shaped regions centered on the magnetic poles and lying roughly at a geomagnetic latitude of 65°, but at times of high solar activity the auroral oval can expand greatly, moving towards the equator. The aurora borealis has been observed from locales as far south as Mexico.

Solar wind also affects the surfaces of Mercury, the Moon, and asteroids in the form of space weathering. Because these bodies lack any substantial atmosphere, solar wind ions strike their surface materials directly and either alter the atomic structure of the materials or form a thin coating containing submicroscopic (or nanophase) metallic iron particles. Until recently, this space weathering effect puzzled researchers working on planetary remote geochemical analysis.

Theoretical problems

Solar neutrino problem

For many years the number of solar electron neutrinos detected on Earth was only a third of the number expected, according to theories describing the nuclear reactions in the Sun. This anomalous result was termed the solar neutrino problem. Theories proposed to resolve the problem either tried to reduce the temperature of the Sun’s interior to explain the lower neutrino flux, or posited that electron neutrinos could oscillate, that is, change into undetectable tau and muon neutrinos as they traveled between the Sun and the Earth. Several neutrino observatories were built in the 1980s to measure the solar neutrino flux as accurately as possible, including the Sudbury Neutrino Observatory and Kamiokande. Results from these observatories eventually led to the discovery that neutrinos have a very small rest mass and can indeed oscillate. Moreover, the Sudbury Neutrino Observatory was able to detect all three types of neutrinos directly, and found that the Sun’s total neutrino emission rate agreed with the Standard Solar Model, although only one-third of the neutrinos seen at Earth were of the electron type.

Coronal heating problem

The optical surface of the Sun (the photosphere) is known to have a temperature of approximately 6,000 K. Above it lies the solar corona at a temperature of 1,000,000 K. The high temperature of the corona shows that it is heated by something other than the photosphere.

It is thought that the energy necessary to heat the corona is provided by turbulent motion in the convection zone below the photosphere, and two main mechanisms have been proposed to explain coronal heating. The first is wave heating, in which sound, gravitational and magnetohydrodynamic waves are produced by turbulence in the convection zone. These waves travel upward and dissipate in the corona, depositing their energy in the ambient gas in the form of heat. The other is magnetic heating, in which magnetic energy is continuously built up by photospheric motion and released through magnetic reconnection in the form of large solar flares and myriad similar but smaller events.

Currently, it is unclear whether waves are an efficient heating mechanism. All waves except Alfvén waves have been found to dissipate or refract before reaching the corona. In addition, Alfvén waves do not easily dissipate in the corona. Current research focus has therefore shifted towards flare heating mechanisms. One possible candidate to explain coronal heating is continuous flaring at small scales, but this remains an open topic of investigation.

Faint young Sun problem

Theoretical models of the Sun’s development suggest that 3.8 to 2.5 billion years ago, during the Archean eon, the Sun was only about 75% as bright as it is today. Such a weak star would not have been able to sustain liquid water on the Earth’s surface, and thus life should not have been able to develop. However, the geological record demonstrates that the Earth has remained at a fairly constant temperature throughout its history, and in fact that the young Earth was somewhat warmer than it is today. The general consensus among scientists is that the young Earth’s atmosphere contained much larger quantities of greenhouse gases (such as carbon dioxide and/or ammonia) than are present today, which trapped enough heat to compensate for the lesser amount of solar energy reaching the planet.
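The tension can be illustrated with a simple radiative-balance estimate. Assuming a present-day solar constant of about 1361 W/m² and a planetary albedo of 0.3 (both assumed illustrative values, not figures from the text), the Earth’s equilibrium temperature without any greenhouse effect scales with the fourth root of the absorbed solar flux:

```python
SIGMA = 5.67e-8         # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_CONSTANT = 1361.0 # present-day solar flux at Earth, W/m^2 (assumed)
ALBEDO = 0.3            # assumed planetary albedo

def equilibrium_temp(flux):
    """Greenhouse-free equilibrium temperature, flux averaged over the sphere."""
    return ((flux * (1 - ALBEDO)) / (4 * SIGMA)) ** 0.25

t_now = equilibrium_temp(SOLAR_CONSTANT)             # ~255 K today
t_archean = equilibrium_temp(0.75 * SOLAR_CONSTANT)  # ~237 K under a 75%-bright Sun
# Both are below freezing; extra greenhouse gases had to bridge the gap either way.
```

Under these assumptions even the modern Sun leaves a greenhouse-free Earth below 0 °C, and a 75%-bright Sun drops the figure by roughly 18 K, which is the shortfall the text attributes to a thicker greenhouse atmosphere.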

Magnetic Field

All matter in the Sun is in the form of gas and plasma due to its high temperatures. This makes it possible for the Sun to rotate faster at its equator (about 25 days) than it does at higher latitudes (about 35 days near its poles). The differential rotation of the Sun’s latitudes causes its magnetic field lines to become twisted together over time, causing magnetic field loops to erupt from the Sun’s surface and trigger the formation of the Sun’s dramatic sunspots and solar prominences (see magnetic reconnection). This twisting action gives rise to the solar dynamo and an 11-year solar cycle of magnetic activity as the Sun’s magnetic field reverses itself about every 11 years.

The influence of the Sun’s rotating magnetic field on the plasma in the interplanetary medium creates the heliospheric current sheet, which separates regions with magnetic fields pointing in different directions. The plasma in the interplanetary medium is also responsible for the strength of the Sun’s magnetic field at the orbit of the Earth. If space were a vacuum, then the Sun’s 10⁻⁴ tesla magnetic dipole field would fall off with the cube of the distance to about 10⁻¹¹ tesla. But satellite observations show that it is about 100 times greater, at around 10⁻⁹ tesla. Magnetohydrodynamic (MHD) theory predicts that the motion of a conducting fluid (e.g., the interplanetary medium) in a magnetic field induces electric currents, which in turn generate magnetic fields; in this respect the interplanetary medium behaves like an MHD dynamo.
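The cube-law figure quoted above is easy to reproduce. Taking a solar radius of about 6.96×10⁸ m and 1 AU as 1.496×10¹¹ m (standard values, assumed here rather than given in the text), a 10⁻⁴ T dipole field at the surface falls off as the inverse cube of distance:

```python
R_SUN = 6.96e8    # solar radius, m (assumed standard value)
AU = 1.496e11     # Earth-Sun distance, m (assumed standard value)
B_SURFACE = 1e-4  # dipole field at the solar surface, tesla (figure from the text)

# A dipole field's strength falls off with the cube of the distance
b_vacuum = B_SURFACE * (R_SUN / AU) ** 3
print(b_vacuum)  # ~1e-11 T, versus the ~1e-9 T actually observed at 1 AU
```

The factor-of-100 discrepancy between this vacuum estimate and satellite measurements is what the MHD dynamo argument above accounts for.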

History of solar observation

Early understanding of the Sun

Humanity’s most fundamental understanding of the Sun is as the luminous disk in the heavens, whose presence above the horizon creates day and whose absence causes night. In many prehistoric and ancient cultures, the Sun was thought to be a solar deity or other supernatural phenomenon, and worship of the Sun was central to civilizations such as the Inca of South America and the Aztecs of what is now Mexico. Many ancient monuments were constructed with solar phenomena in mind; for example, stone megaliths accurately mark the summer solstice (some of the most prominent megaliths are located in Nabta Playa, Egypt, and at Stonehenge in England); the pyramid of El Castillo at Chichén Itzá in Mexico is designed to cast shadows in the shape of serpents climbing the pyramid at the vernal and autumn equinoxes. With respect to the fixed stars, the Sun appears from Earth to revolve once a year along the ecliptic through the zodiac, and so the Sun was considered by Greek astronomers to be one of the seven planets (Greek planetes, “wanderer”), after which the seven days of the week are named in some languages.

Development of modern scientific understanding

One of the first people in the Western world to offer a scientific explanation for the Sun was the Greek philosopher Anaxagoras, who reasoned that it was a giant flaming ball of metal even larger than the Peloponnesus, and not the chariot of Helios. For teaching this heresy, he was imprisoned by the authorities and sentenced to death (though he was later released through the intervention of Pericles).

Another scientist to challenge the accepted view was Nicolaus Copernicus, who in the 16th century developed the theory that the Earth orbited the Sun, rather than the other way around. In the early 17th century, Galileo pioneered telescopic observations of the Sun, making some of the first known observations of sunspots and positing that they were on the surface of the Sun rather than small objects passing between the Earth and the Sun. Sir Isaac Newton observed the Sun’s light using a prism, and showed that it was made up of light of many colors, while in 1800 William Herschel discovered infrared radiation beyond the red part of the solar spectrum. The 1800s saw spectroscopic studies of the Sun advance, and Joseph von Fraunhofer made the first observations of absorption lines in the spectrum, the strongest of which are still often referred to as Fraunhofer lines.

In the early years of the modern scientific era, the source of the Sun’s energy was a significant puzzle. Among the proposals were that the Sun extracted its energy from friction of its gas masses, or that its energy was derived from gravitational potential energy released as it continuously contracted. Either of these sources of energy could only power the Sun for a few million years at most, but geologists were showing that the Earth’s age was several billion years. Nuclear fusion was first proposed as the source of solar energy only in the 1930s, when Hans Bethe calculated the details of the two main energy-producing nuclear reactions that power the Sun.

Solar space missions

The first satellites designed to observe the Sun were NASA’s Pioneers 5, 6, 7, 8 and 9, which were launched between 1959 and 1968. These probes orbited the Sun at a distance similar to that of the Earth’s orbit, and made the first detailed measurements of the solar wind and the solar magnetic field. Pioneer 9 operated for a particularly long period of time, transmitting data until 1987.

In the 1970s, Helios 1 and the Skylab Apollo Telescope Mount provided scientists with significant new data on solar wind and the solar corona. The Helios 1 satellite was a joint U.S.-German probe that studied the solar wind from an orbit carrying the spacecraft inside Mercury’s orbit at perihelion. The Skylab space station, launched by NASA in 1973, included a solar observatory module called the Apollo Telescope Mount that was operated by astronauts resident on the station. Skylab made the first time-resolved observations of the solar transition region and of ultraviolet emissions from the solar corona. Discoveries included the first observations of coronal mass ejections, then called “coronal transients,” and of coronal holes, now known to be intimately associated with the solar wind.

In 1980, the Solar Maximum Mission was launched by NASA. This spacecraft was designed to observe gamma rays, X-rays and UV radiation from solar flares during a time of high solar activity. Just a few months after launch, however, an electronics failure caused the probe to go into standby mode, and it spent the next three years in this inactive state. In 1984 Space Shuttle Challenger mission STS-41C retrieved the satellite and repaired its electronics before re-releasing it into orbit. The Solar Maximum Mission subsequently acquired thousands of images of the solar corona before re-entering the Earth’s atmosphere in June 1989.

Japan’s Yohkoh (Sunbeam) satellite, launched in 1991, observed solar flares at X-ray wavelengths. Mission data allowed scientists to identify several different types of flares, and also demonstrated that the corona away from regions of peak activity was much more dynamic and active than had previously been supposed. Yohkoh observed an entire solar cycle but went into standby mode when an annular eclipse in 2001 caused it to lose its lock on the Sun. It was destroyed by atmospheric reentry in 2005.

One of the most important solar missions to date has been the Solar and Heliospheric Observatory, jointly built by the European Space Agency and NASA and launched on December 2, 1995. Originally a two-year mission, SOHO has now operated for over ten years (as of 2006). It has proved so useful that a follow-on mission, the Solar Dynamics Observatory, is planned for launch in 2008. Situated at the L1 Lagrangian point between the Earth and the Sun (where the combined gravitational pull of the two bodies keeps a spacecraft orbiting the Sun in step with the Earth), SOHO has provided a constant view of the Sun at many wavelengths since its launch. In addition to its direct solar observation, SOHO has enabled the discovery of large numbers of comets, mostly very tiny sungrazing comets which incinerate as they pass the Sun.

All these satellites have observed the Sun from the plane of the ecliptic, and so have only observed its equatorial regions in detail. The Ulysses probe was launched in 1990 to study the Sun’s polar regions. It first traveled to Jupiter, to ‘slingshot’ past the planet into an orbit which would take it far above the plane of the ecliptic. Serendipitously, it was well-placed to observe the collision of Comet Shoemaker-Levy 9 with Jupiter in 1994. Once Ulysses was in its scheduled orbit, it began observing the solar wind and magnetic field strength at high solar latitudes, finding that the solar wind from high latitudes was moving at about 750 km/s (slower than expected), and that there were large magnetic waves emerging from high latitudes which scattered galactic cosmic rays.

Elemental abundances in the photosphere are well known from spectroscopic studies, but the composition of the interior of the Sun is more poorly understood. A solar wind sample return mission, Genesis, was designed to allow astronomers to directly measure the composition of solar material. Genesis returned to Earth in 2004 but was damaged by a crash landing after its parachute failed to deploy on reentry into Earth’s atmosphere. Despite severe damage, some usable samples have been recovered from the spacecraft’s sample return module and are undergoing analysis.

Sun observation and eye damage

Sunlight is very bright, and looking directly at the Sun with the naked eye for brief periods can be painful, but is generally not hazardous. Looking directly at the Sun causes phosphene visual artifacts and temporary partial blindness. It also delivers about 4 milliwatts of sunlight to the retina, slightly heating it and potentially (though not normally) damaging it. UV exposure gradually yellows the lens of the eye over a period of years and can cause cataracts, but those depend on general exposure to solar UV, not on whether one looks directly at the Sun.

Viewing the Sun through light-concentrating optics such as binoculars is very hazardous without an attenuating (ND) filter to dim the sunlight. Using a proper filter is important as some improvised filters pass UV rays that can damage the eye at high brightness levels. Unfiltered binoculars can deliver over 500 times more sunlight to the retina than does the naked eye, killing retinal cells almost instantly. Even brief glances at the midday Sun through unfiltered binoculars can cause permanent blindness. One way to view the Sun safely is by projecting an image onto a screen using binoculars or a small telescope.

Partial solar eclipses are hazardous to view because the eye’s pupil is not adapted to the unusually high visual contrast: the pupil dilates according to the total amount of light in the field of view, not according to the brightest object in the field. During partial eclipses most sunlight is blocked by the Moon passing in front of the Sun, but the uncovered parts of the photosphere have the same surface brightness as during a normal day. In the overall gloom, the pupil expands from ~2 mm to ~6 mm, and each retinal cell exposed to the solar image receives about ten times more light than it would looking at the non-eclipsed Sun. This can damage or kill those cells, resulting in small permanent blind spots for the viewer. The hazard is insidious for inexperienced observers and for children, because there is no perception of pain: it is not immediately obvious that one’s vision is being destroyed.
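The “about ten times” figure follows directly from the pupil geometry: the light reaching each retinal cell from the solar image scales with pupil area, hence with the square of the pupil diameter. A quick sketch, using the diameters from the text:

```python
D_DAYLIGHT = 2.0  # pupil diameter in bright daylight, mm (from the text)
D_ECLIPSE = 6.0   # pupil diameter in eclipse gloom, mm (from the text)

# Retinal irradiance from the solar image scales with pupil area ~ diameter^2
exposure_ratio = (D_ECLIPSE / D_DAYLIGHT) ** 2
print(exposure_ratio)  # 9.0, i.e. roughly ten times the normal exposure
```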

During sunrise and sunset, sunlight is attenuated through Rayleigh and Mie scattering of light by a particularly long passage through Earth’s atmosphere, and the direct Sun is sometimes faint enough to be viewed directly without discomfort, or safely with binoculars. Hazy conditions, atmospheric dust, and high humidity contribute to this atmospheric attenuation.
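Rayleigh scattering removes short wavelengths far more strongly than long ones, since the scattered intensity varies as λ⁻⁴; this is why the low Sun looks reddened. A rough comparison (the two wavelengths are illustrative choices, not figures from the text):

```python
BLUE_NM = 450.0  # illustrative blue wavelength, nm
RED_NM = 700.0   # illustrative red wavelength, nm

# Rayleigh-scattered intensity is proportional to wavelength^-4,
# so the blue-to-red scattering ratio is (red/blue)^4:
scatter_ratio = (RED_NM / BLUE_NM) ** 4
print(scatter_ratio)  # ~5.9: blue is scattered out of the direct beam ~6x more than red
```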

 

Mary I of Scotland


Mary I of Scotland (Mary Stuart, popularly known as Mary, Queen of Scots; December 8, 1542 – February 8, 1587) was the Queen of Scots (the monarch of the Kingdom of Scotland) from December 14, 1542 to July 24, 1567. She also sat as Queen Consort of France from July 10, 1559 to December 5, 1560. Because of her tragic life, she is one of the best-known Scottish monarchs. To prevent Mary from taking the English throne and the Stuarts from becoming the dominant dynastic family of Europe, Elizabeth I of England ultimately ordered her execution. In the eyes of many Catholics, Elizabeth was illegitimate, being the daughter of an unlawful union between the divorced Henry VIII of England and his second wife, Anne Boleyn. Mary Stuart became a martyr to obsessive ambition and a misguided and perverse blend of politics and religion. Nevertheless, it was her son who became James VI of Scotland and James I of England and Ireland, the first to style himself King of Great Britain.

Early Life

Princess Mary Stuart was born at Linlithgow Palace, Linlithgow, West Lothian, Scotland to King James V of Scotland and his French wife, Marie de Guise. In Falkland Palace, Fife, her father heard of the birth and prophesied, “The devil go with it! It came with a lass, it will pass with a lass!” James truly believed that Mary’s birth marked the end of the Stuarts’ reign over Scotland. Instead, through Mary’s son, it was the beginning of their reign over both the Kingdom of Scotland and the Kingdom of England.

The six-day-old Mary became Queen of Scotland when her father died at the age of 30. James Hamilton, Second Earl of Arran, was the next in line for the throne after Mary; he acted as regent for Mary until 1554, when he was succeeded by the Queen’s mother, who continued as regent until her death in 1560.

In July 1543, when Mary was six months old, the Treaties of Greenwich promised her in marriage to Edward, son of King Henry VIII of England, with the wedding to take place in 1552 and their heirs to inherit the Kingdoms of Scotland and England. Mary’s mother was strongly opposed to the proposition, and two months later she took Mary into hiding at Stirling Castle, where preparations were made for Mary’s coronation.

When Mary was only nine months old she was crowned Queen of Scotland in the Chapel Royal at Stirling Castle on September 9, 1543. Because the Queen was an infant and the ceremony unique, Mary’s coronation was the talk of Europe. She was magnificently dressed for the occasion in an elaborate satin jeweled gown beneath a red velvet mantle trimmed with ermine. Unable yet to walk, she was carried by Lord Livingston in solemn procession to the Chapel Royal. Inside, Lord Livingston brought Mary forward to the altar, set her gently on the throne placed there, and stood by holding her to keep her from rolling off.

Quickly, Cardinal David Beaton put the Coronation Oath to her, which Lord Livingston answered on her behalf. The Cardinal immediately unfastened Mary’s heavy robes and began anointing her with the holy oil. The Scepter was brought forth and placed in Mary’s hand, and she grasped the heavy shaft. Then the Sword of State was presented by the Earl of Argyll, and the Cardinal performed the ceremony of girding the three-foot sword to the tiny body.

The Earl of Arran delivered the royal Crown to Cardinal Beaton, who placed it gently onto the child’s head. The Cardinal steadied the crown as the lords of the kingdom came up and knelt before the tiny queen, placing their hands on her crown and swearing allegiance to her.

The “rough wooing”

The Treaties of Greenwich fell apart soon after Mary’s coronation. The betrothal did not sit well with the Scots, especially after King Henry VIII tried to change the agreement so that he could take custody of Mary years before the marriage was to take place. He also wanted the Scots to break their traditional alliance with France. Fearing an uprising among the people, the Scottish Parliament broke off the treaty and the engagement at the end of the year.

Henry VIII then began his “rough wooing” designed to impose the marriage to his son on Mary. This consisted of a series of raids on Scottish territory and other military actions. It lasted until June 1551, costing over half a million pounds and many lives. In May of 1544, the English Earl of Hertford arrived in the Firth of Forth hoping to capture the city of Edinburgh and kidnap Mary, but Marie de Guise hid her in the secret chambers of Stirling Castle.

On September 10, 1547, known as “Black Saturday,” the Scots suffered a bitter defeat at the Battle of Pinkie Cleugh. Marie de Guise, fearful for her daughter, sent her temporarily to Inchmahome Priory, and turned to the French ambassador Monsieur D’Oysel.

The French, remaining true to the Auld Alliance, came to the aid of the Scots. The new French King, Henri II, was now proposing to unite France and Scotland by marrying the little Queen to his newborn son, the Dauphin François. This seemed to Marie to be the only sensible solution to her troubles. In February 1548, hearing that the English were on their way back, Marie moved Mary to Dumbarton Castle. The English left a trail of devastation behind once more and seized the strategically located town of Haddington. By June, the much awaited French help had arrived. On July 7, the French Marriage Treaty was signed at a nunnery near Haddington.

Childhood in France

With her marriage agreement in place, five-year-old Mary was sent to France in 1548 to spend the next ten years at the French court. Henri II had offered to guard her and raise her. On August 7, 1548, the French fleet sent by Henri II sailed back to France from Dumbarton carrying the five-year-old Queen of Scotland on board. She was accompanied by her own little court consisting of two lords, two half brothers, and the “four Marys,” four little girls her own age, all named Mary, and the daughters of the noblest families in Scotland: Beaton, Seton, Fleming, and Livingston.

Vivacious, pretty, and clever, Mary had a promising childhood. While in the French court, she was a favorite. She received the best available education, and at the end of her studies, she had mastered French, Latin, Greek, Spanish and Italian in addition to her native Scots. She also learned how to play two instruments and learned prose, horsemanship, falconry, and needlework.

On April 24, 1558, she married the Dauphin François at Notre Dame de Paris. When Henri II died on July 10, 1559, Mary became Queen Consort of France; her husband became François II of France.

Claim to the English throne

After the death of Henry VIII’s elder daughter, Queen Mary I of England, in November 1558, she was succeeded by her only surviving sibling, Elizabeth I. Under the Third Succession Act, passed in 1543 by the Parliament of England, Elizabeth was the heir of Mary I of England.

Under the ordinary laws of succession, Mary was next in line to the English throne after her cousin, Elizabeth I, who was childless. In the eyes of many Catholics Elizabeth was illegitimate, making Mary the true heir. However, Henry VIII’s last will and testament had excluded the Stuarts from succeeding to the English throne.

Mary’s troubles were still further increased by the Huguenot rising in France, called le tumulte d’Amboise (March 6–17, 1560), making it impossible for the French to help Mary’s side in Scotland. The question of the succession was therefore a real one.

Religious divide

François died on December 5, 1560. Mary’s mother-in-law, Catherine de Medici, became regent for the late king’s brother Charles IX, who inherited the French throne. Under the terms of the Treaty of Edinburgh, signed by Mary’s representatives on July 6, 1560 following the death of Marie of Guise, France undertook to withdraw its troops from Scotland and recognize Elizabeth’s right to rule England. The 18-year-old Mary, still in France, refused to ratify the treaty.

Mary returned to Scotland soon after her husband’s death and arrived in Leith on August 19, 1561. Despite her talents, Mary’s upbringing had not given her the judgment to cope with the dangerous and complex political situation in Scotland at the time.

Mary, being a devout Roman Catholic, was regarded with suspicion by many of her subjects as well as by Elizabeth, who was her father’s cousin and the monarch of the neighboring Protestant country of England. Scotland was torn between Catholic and Protestant factions, and Mary’s illegitimate half-brother, James Stewart, First Earl of Moray, was a leader of the Protestant faction. The Protestant reformer John Knox also preached against Mary, condemning her for hearing Mass, dancing, dressing too elaborately, and many other things, real and imagined.

To the disappointment of the Catholic party, however, Mary did not hasten to take up the Catholic cause. She tolerated the newly-established Protestant ascendancy, and kept James Stewart as her chief adviser. In this, she may have had to acknowledge her lack of effective military power in the face of the Protestant Lords. She joined with James in the destruction of Scotland’s leading Catholic magnate, Lord Huntly, in 1562.

Mary was also having second thoughts about the wisdom of having crossed Elizabeth, and she attempted to make up the breach by inviting Elizabeth to visit Scotland. Elizabeth refused, and the bad blood remained between them.

Marriage to Darnley

At Holyrood Palace on July 29, 1565, Mary married Henry Stuart, Lord Darnley, a descendant of King Henry VII of England and Mary’s first cousin. The union infuriated Elizabeth, who felt her permission should have been sought before the marriage took place, as Darnley was an English subject. Elizabeth also felt threatened by the marriage, because Mary’s and Darnley’s combined Scottish and English royal blood would produce children with extremely strong claims to both Mary’s and Elizabeth’s thrones.

In 1566 Mary gave birth to a son, James. Before long a plot was hatched to remove Darnley, who was already ill. He was recuperating in a house in Edinburgh where Mary visited him frequently. In February 1567 an explosion occurred in the house, and Darnley was found dead in the garden, apparently of strangulation. This event, which should have been Mary’s salvation, only harmed her reputation. James Hepburn, Fourth Earl of Bothwell, an adventurer who would become her third husband, was generally believed to be guilty of the assassination, and was brought before a mock trial but acquitted. Mary attempted to regain support among her Lords while Bothwell convinced some of them to sign the Ainslie Tavern Bond, in which they agreed to support his claims to marry Mary.

Abdication and imprisonment

On April 24, 1567, Mary visited her son at Stirling for the last time. On her way back to Edinburgh Mary was abducted by Bothwell and his men and taken to Dunbar Castle. On May 6 they returned to Edinburgh and on May 15, at Holyrood Palace, Mary and Bothwell were married according to Protestant rites.

The Scottish nobility turned against Mary and Bothwell and raised an army against them. The Lords took Mary to Edinburgh and imprisoned her in Loch Leven Castle. On July 24, 1567, she was forced to abdicate the Scottish throne in favor of her one-year-old son James.

On May 2, 1568, Mary escaped from Loch Leven and once again managed to raise a small army. After her army’s defeat at the Battle of Langside on May 13, she fled to England. When Mary entered England on May 19, she was imprisoned by Elizabeth’s officers at Carlisle.

Elizabeth ordered an inquiry into Darnley’s murder, which was held in York. Mary refused to acknowledge the power of any court to try her, since she was an anointed Queen. The man ultimately in charge of the prosecution, James Stewart, Earl of Moray, was ruling Scotland in Mary’s absence, and his chief motive was to keep Mary out of Scotland and her supporters under control. Mary was not permitted to see the evidence against her or to speak in her own defense at the tribunal. She refused to offer a written defense unless Elizabeth would guarantee a verdict of not guilty, which Elizabeth would not do.

The inquiry hinged on the “Casket Letters,” eight letters purportedly from Mary to Bothwell, reported by James Douglas, Fourth Earl of Morton to have been found in Edinburgh in a silver box engraved with an F (supposedly for Francis II), along with a number of other documents, including the Mary/Bothwell marriage certificate. The authenticity of the Casket Letters has been the source of much controversy among historians. Mary argued that her handwriting was not difficult to imitate, and it has frequently been suggested that the letters are complete forgeries, that incriminating passages were inserted before the inquiry, or that the letters were written to Bothwell by some other person. Comparisons of writing style have often concluded that they were not Mary’s work.

Elizabeth considered Mary’s designs on the English throne to be a serious threat, and so 18 years of confinement followed. Bothwell was imprisoned in Denmark, became insane, and died in 1578, still in prison.

In 1570, Elizabeth was persuaded by representatives of Charles IX of France to promise to help Mary regain her throne. As a condition, she demanded the ratification of the Treaty of Edinburgh, something to which Mary would still not agree. Nevertheless, William Cecil, First Baron Burghley, continued negotiations with Mary on Elizabeth’s behalf.

The Ridolfi Plot, which attempted to unite Mary and the Duke of Norfolk in marriage, caused Elizabeth to reconsider. With the queen’s encouragement, Parliament introduced a bill in 1572 barring Mary from the throne. Elizabeth unexpectedly refused to give it the royal assent. The furthest she ever went was in 1584, when she introduced a document (the “Bond of Association”) aimed at preventing any would-be successor from profiting from her murder. It was not legally binding, but was signed by thousands, including Mary herself.

Mary eventually became a liability that Elizabeth could no longer tolerate. Elizabeth did ask Mary’s final custodian, Amias Paulet, if he would contrive some accident to remove Mary; he refused on the grounds that he would not allow such “a stain on his posterity.” Mary was implicated in several plots to assassinate Elizabeth and put herself on the throne, possibly with French or Spanish help. The most serious was the Babington Plot, but some of Mary’s supporters believed it and other plots to be either fictitious or undertaken without Mary’s knowledge.

Trial and execution

Mary was put on trial for treason by a court of about 40 noblemen, some Catholic, after being implicated in the Babington Plot and after having allegedly sanctioned the assassination of Elizabeth. Mary denied the accusation and was spirited in her defense. She drew attention to the fact that she had been denied the opportunity to review the evidence or her papers, which had been removed from her; that she had been denied access to legal counsel; and that, having never been an English subject, she could not be convicted of treason. The extent to which the plot was created by Sir Francis Walsingham and the English secret service will always remain open to conjecture.

In a trial presided over by England’s Chief Justice, Sir John Popham, Mary was ultimately convicted of treason, and she was beheaded at Fotheringhay Castle, Northamptonshire, on February 8, 1587. She spent the last hours of her life in prayer and in writing letters and her will. She requested that her servants be released and that she be buried in France.

In response to Mary’s death, the Spanish Armada sailed to England to depose Elizabeth, but it lost a considerable number of ships in the Battle of Gravelines and ultimately retreated without touching English soil.

Mary’s body was embalmed and left unburied at her place of execution for a year after her death. Her remains were placed in a secure lead coffin. She was initially buried at Peterborough Cathedral in 1588, but her body was exhumed in 1612 when her son, King James I of England, ordered she be re-interred in Westminster Abbey. It remains there, along with at least 40 other descendants, in a chapel on the other side of the Abbey from the grave of her cousin Elizabeth. In the 1800s her tomb and that of Elizabeth I were opened to try to ascertain where James I was buried; he was ultimately found buried with Henry VII.


John Hancock


John Hancock (January 12, 1737 – October 8, 1793) was an American leader, politician, writer, political philosopher and one of the Founding Fathers of the United States. Hancock was President of the Second Continental Congress and of the Congress of the Confederation. He served as the first governor of Massachusetts following independence from Great Britain. He was the first person to sign the Declaration of Independence and he played an instrumental role—sometimes by accident, other times by design—in provoking the American Revolutionary War.

Born to privilege and wealth, Hancock used his money to foster the cause of independence from British rule. It was under his leadership as president that the Continental Congress, with the rebellion in grave danger in 1776, evacuated Philadelphia and relocated to the countryside in Newton, Pennsylvania. Throughout his adult life, Hancock gave tirelessly to the cause of human liberty.

Early life

Hancock was born in Braintree, Massachusetts, in a part of town which eventually became the separate city of Quincy, Massachusetts. His father died when he was young, and he was adopted by his paternal uncle Thomas Hancock, a highly successful merchant in New England. After graduating from Boston Latin School, he attended Harvard University and received a business degree in 1754, when he was 17. Upon graduation, he worked for his uncle. From 1760 to 1764, Hancock lived in England while building relationships with customers and suppliers of his uncle’s shipbuilding business. Shortly after his return from England, his uncle died; Hancock inherited the fortune and the business, making him the wealthiest man in New England at the time.

Hancock married Dorothy Quincy. Quincy’s aunt, also named Dorothy Quincy, was the great-grandmother of Oliver Wendell Holmes.

The couple had two children, neither of whom survived to adulthood.

Early career

A Boston selectman and representative to the Massachusetts General Court, Hancock was naturally disposed by his colonial trade business to resist the Stamp Act, which attempted to restrict colonial trading.

The Stamp Act was repealed, but later acts (such as the Townshend Acts) led to further taxation on common goods. Eventually, Hancock’s shipping practices became more evasive, and he began to smuggle glass, lead, paper and tea. In 1768, upon arriving from England, his ship Liberty was impounded by British customs officials for violation of revenue laws. This caused a riot among some infuriated Bostonians, depending as they did on the supplies on board.

His regular merchant trade as well as his smuggling practices financed much of his region’s resistance to British authority and his financial contributions led the people of Boston to joke that “Sam Adams writes the letters [to newspapers] and John Hancock pays the postage”.

American Revolution

At first only a financier of the growing rebellion, he later became a public critic of British rule. On March 5, 1774, the fourth anniversary of the Boston Massacre, he gave a speech strongly condemning the British. In the same year, he was unanimously elected president of the Provincial Congress of Massachusetts, and presided over its Committee of Safety. Under Hancock, Massachusetts was able to raise bands of “minutemen”—soldiers who pledged to be ready for battle at a minute’s notice—and his boycott of tea imported by the British East India Company eventually led to the Boston Tea Party.

In April 1775, as the British intent became apparent, Hancock and Samuel Adams slipped away from Boston to elude capture, staying in the Hancock-Clarke House in Lexington, Massachusetts. There, Paul Revere roused them about midnight before the British troops arrived at dawn for the Battle of Lexington and Concord. At this time, General Thomas Gage ordered Hancock and Adams arrested for treason. Following the battle a proclamation was issued granting a general pardon to all who would demonstrate loyalty to the crown—with the exceptions of Hancock and Adams.

On May 24, 1775, he was elected the third president of the Second Continental Congress, succeeding Peyton Randolph. He would serve until October 30, 1777, when he was himself succeeded by Henry Laurens.

In the first month of his presidency, on June 19, 1775, Hancock commissioned George Washington as commander-in-chief of the Continental Army. A year later, Hancock sent Washington a copy of the July 4, 1776 congressional resolution calling for independence as well as a copy of the Declaration of Independence.

Hancock was the only one to sign the Declaration of Independence on July 4; the other 55 delegates signed on August 2. He also requested Washington have the declaration read to the Continental Army. According to popular legend, he signed his name largely and clearly to be sure King George III could read it without his spectacles, causing his name to become, in the United States, an eponym for “signature.”

From 1780 to 1785, he was governor of Massachusetts. Hancock’s skills as orator and moderator were much admired, but during the American Revolution he was most often sought out for his ability to raise funds and supplies for American troops. Despite his skill in the merchant trade, even Hancock had trouble meeting the Continental Congress’s demand for beef cattle to feed the hungry army. On January 19, 1781, General Washington warned Hancock:

I should not trouble your Excellency, with such reiterated applications on the score of supplies, if any objects less than the safety of these Posts on this River, and indeed the existence of the Army, were at stake. By the enclosed Extracts of a Letter, of Yesterday, from Major Gen. Heath, you will see our present situation, and future prospects. If therefore the supply of Beef Cattle demanded by the requisitions of Congress from Your State, is not regularly forwarded to the Army, I cannot consider myself as responsible for the maintenance of the Garrisons below West Point, New York, or the continuance of a single Regiment in the Field. (United States Library of Congress, 1781)

Hancock returned as governor of Massachusetts in 1787 and served until his death in 1793. He was interred at the Granary Burying Ground in Boston.

Jan Hus


Jan Hus, also known as John Huss (c. 1369 – 1415) was a Czech (living in the area then known as Bohemia) religious thinker, philosopher, and reformer, master at Charles University in Prague. His followers became known as Hussites. The Roman Catholic Church considered his teachings heretical. Hus was excommunicated in 1411, condemned by the Council of Constance, and burned at the stake on July 6, 1415, in Konstanz (Constance), Germany.

Hus was a precursor to the Protestant movement, and many of his ideas anticipated those of Martin Luther. He was, though, a more radical critic than most subsequent reformers of the relationship between the Christian church and the use of military force, condemning the church’s blessing of crusades, which even Francis of Assisi did not do so unequivocally. His extensive writings earn him a prominent place in Czech literary history.


Early life and studies

John Hus was born at Husinec (Prague-East District) (75 kilometers southwest of Prague) in or around 1369. His father was a wealthy farmer. He attended the university in Prague and gained his master’s degree in 1396. He started to teach in 1398, and was ordained as a priest in 1400. He became familiar with the ideas of John Wycliffe following the marriage of England’s Richard II to Anne of Bohemia. In 1401 Hus became dean of the faculty of philosophy, then rector of the university in 1402–3. He also became curate (capellarius) of the university’s Bethlehem Chapel, where he preached in the Czech language. This was itself enough to attract controversy. In 1405, he wrote De Omni Sanguine Christi Glorificato, in which he urged Christians to desist from looking for miracles as signs of Christ’s presence and rather to seek him in his word. Hus had just taken part in an official investigation into the authenticity of alleged miracles at Wilsnack, near Wittenberg, which was attracting many pilgrims from Bohemia. He declared the miracles to be a hoax, and pilgrimage from Bohemia was subsequently banned. Hus was by now so popular a preacher that he was on several occasions invited, with his friend Stanislaus of Znaim, to preach at synods (hierarchical gatherings to discuss church affairs).

He was also responsible for introducing the use of diacritics (especially the inverted hat, háček) into Czech spelling in order to represent each sound by a single symbol, and is credited with fostering a sense of Czech identity.

Papal schism

The University of Prague, founded in 1348, served the whole Holy Roman Empire and was being torn apart by the ongoing papal schism, in which Pope Gregory XII in Rome and Pope Benedict XIII, based in Avignon, France, both laid claim to the papacy.

King Wenceslaus of Bohemia felt Pope Gregory XII might interfere with his own plans to be crowned Holy Roman Emperor; thus, he renounced Gregory and ordered his prelates to observe strict neutrality toward both popes. He also said that he expected the same of the university. Archbishop Zbyněk Zajíc remained faithful to Gregory, however, and at the university it was only the “Bohemian nation” (one of four voting blocs), with Hus as its leader and spokesman, which avowed neutrality. The other nations were those of the Bavarians, Saxons, and Poles.

Kutná Hora

In response, Wenceslaus, at the instigation of Hus and other Bohemian leaders, issued a decree dated January 18, 1409, declaring that the Bohemian nation should now have three votes (instead of one) in all affairs of the university, while the foreign nations, principally the Germans, should have only one vote. As a consequence, somewhere between five and twenty thousand German doctors, masters, and students left the university in 1409, going on to found the University of Leipzig, among others. Prague then lost its international importance, becoming a Czech school. Hus was elected the first rector of the reorganized university.

The archbishop was now isolated, while Hus was at the height of his fame.

Alexander V becomes Pope

In 1409, in an attempt to end the papal schism, the Council of Pisa met to elect a new pope, Alexander V, who was intended to supersede the other two. This did not succeed, since many people remained loyal to one of the other two popes, so the council effectively added a third contender; Pope Alexander V is himself now considered an antipope. Hus and his followers, as well as King Wenceslaus, chose to transfer their allegiance to Alexander V. Under pressure from Wenceslaus, Archbishop Zbyněk eventually did the same, but he did not change his attitude towards Hus, whose Wyclifite sympathies he considered dangerous. He now took his complaints to Alexander V, accusing the Wyclifites of causing dissension and strife within the church.

Excommunication of Hus

Alexander V issued a papal bull on December 20, 1409, which empowered the archbishop to proceed against Wyclifism: Wycliffe’s books were to be surrendered, his doctrines (usually referred to as the 45 articles) revoked, and free preaching discontinued. After the publication of the bull in 1410, Hus appealed to Alexander V, but in vain; all books and valuable manuscripts of Wycliffe were burned. In protest, riots broke out in parts of Bohemia. Hus himself was included in the terms of the bull, as a known Wyclifite.

The government supported Hus, whose influence and popularity were rapidly increasing. He continued to preach in the Bethlehem Chapel, and became bolder and bolder in his accusations against the church. The pope responded by banning worship in all of the city’s churches and by forbidding burial on consecrated ground. Few people took any notice, and it certainly did not silence Hus. The magistrates and other city leaders who supported Hus were also excommunicated.

Crusade Against Naples

In 1411 John XXIII, who had succeeded Alexander V, proclaimed a crusade against King Ladislaus of Naples, the protector of Gregory XII. Crusade was the official term used for a holy war to root out and destroy heresy, or the enemies of Christendom. Preachers urged people to crowd the churches and give generously, and also to purchase indulgences to fund the crusade; a traffic in indulgences quickly developed.

Condemnation of Indulgences and Crusade

Hus, following Wycliffe’s example, immediately condemned indulgences, as Martin Luther later would. Hus also denounced the crusade. In 1412, he delivered his Quaestio magistri Johannis Hus de indulgentiis, which was taken literally from the last chapter of Wycliffe’s book De ecclesia and his treatise De absolutione a pena et culpa. The pamphlet stated that no pope or bishop had the right to take up the sword in the name of the church; that he should pray for his enemies and bless those that curse him; and that man obtains forgiveness of sins by real repentance, not through money.

The doctors of the theological faculty replied, but without success. A few days afterward some of Hus’s followers, led by Vok Voksa z Valdštejna, burned the papal bulls; Hus, they said, should be obeyed rather than the church, which they considered a fraudulent mob of adulterers and Simonists.

Response

That year, three young Hussites who openly contradicted the preachers during their sermons and called indulgences a fraud were beheaded. Later, they were considered the first martyrs of the Hussite Church.

In the meantime, the faculty had renewed their condemnation of the forty-five articles and added several other heretical ideas associated with Hus. The king forbade the teaching of these articles, but neither Hus nor the university complied with the ruling, requesting that the unscriptural nature of the articles first be proven. Hus himself never said that he agreed with the forty-five articles, only that they should be discussed before being condemned.

Further dissensions

The situation at Prague had stirred up a sensation, unpleasant for the Roman party; papal legates and Archbishop Albik tried to persuade Hus to give up his opposition to the papal bulls, and the king made an unsuccessful attempt to reconcile the two parties.

Call for arrest of Hus

The clergy of Prague now took their complaints to the pope, who ordered the Cardinal of St. Angelo to proceed against Hus without mercy. The cardinal placed him under a ban, which meant that he was to be seized and delivered to the archbishop, and his chapel was to be destroyed. This was followed by stricter measures against Hus and his followers, and in turn by counter-measures of the Hussites, including an appeal by Hus that Jesus Christ—and not the pope—was the supreme judge. This intensified popular excitement. Anyone found sheltering Hus was now liable to be executed. Even his closest supporters on the faculty, Stanislav ze Znojma and Štěpán Páleč, distanced themselves from him at this time. The interdict against him was renewed in June 1412. Consequently, Hus agreed to leave Prague for Kozí Hrádek, where he engaged in open-air preaching and in copious correspondence, some of which survives.

Attempted reconciliation

The king, aware that further strife would be damaging, tried once again to harmonize the opposing parties. In 1412 he summoned the lay and religious leaders for a consultation, and at their suggestion ordered a synod to be held at Český Brod on February 2, 1412, supposedly to reconcile the Hussites and the church. It did not take place there; instead, despite the declared aim of reconciliation, it met in the palace of the archbishops at Prague, in a deliberate attempt to exclude Hus.

Propositions were made for the restitution of the peace of the church, with Hus demanding especially that Bohemia should have the same freedom in regard to ecclesiastical affairs as other countries, and that approbation and condemnation should therefore be announced only with the permission of the state power. This is wholly the doctrine of Wycliffe (Sermones, iii. 519, etc.). There followed treatises from both parties, but no agreement was reached. “Even if I should stand before the stake which has been prepared for me,” Hus wrote at the time, “I would never accept the recommendation of the theological faculty.” The synod did not produce any results, but the king ordered a commission to continue the work of reconciliation.

The doctors of the university required that Hus and his followers approve their conception of the church, according to which the pope is the head, the cardinals are the body of the church, and that all regulations of this church must be obeyed.

Hus protested vigorously against this definition of church, since it made pope and cardinals alone the church, excluding the people. Nevertheless the Hussite party seems to have made a great effort toward reconciliation. To the article that the Roman Church must be obeyed, they added only, “so far as every pious Christian is bound.” Stanislav ze Znojma and Štěpán Páleč protested against this addition and left the convention. The king exiled them, along with two other spokesmen.

Writings of Hus and Wycliffe

Hus’ work on the church (De ecclesia) has been most frequently quoted and admired or criticized. The first ten chapters draw heavily on Wycliffe’s work of the same title, while subsequent chapters are basically an abstract of Wycliffe’s De potestate pape, on the power of the pope. Wycliffe had written his book to oppose the common view that the church consisted only of the clergy, and Hus now found himself in a similar position. He wrote his work at the castle of one of his protectors in Kozí Hrádek (near Austria), and sent it to Prague, where it was publicly read in the Bethlehem Chapel. Stanislav ze Znojma and Páleč replied with treatises of the same title.

In January 1413, a general council assembled in Rome which condemned the writings of Wycliffe and ordered them to be burned.

Hus’s Religion

Hus wanted to make Christianity more accessible to ordinary people. He wanted people to live lives guided by the Bible, which they should read for themselves. Ordinary people, too, had a right to interpret the scriptures, which were not the preserve of the clergy. He despised the wealth and power of the institutionalized church. He believed in a much simpler life-style than that lived by many clergy. He advocated frequent, even daily communion—and in both kinds. At the time, the laity received only the bread, the chalice being reserved to priests; it was popularly held that lay-people could not be trusted to handle Jesus’ body and blood with sufficient reverence. Against the notion that a sacrament was valid even if the priest who performed it was immoral, he believed that “the efficacy of sacraments depended on the worthiness of ministers” (Christie-Murray, 117). He thought that veneration of monks, saints, and the ritual of the church itself was a distraction from direct fellowship with God. He criticized the clergy for their wealth and worldliness; many lived lives of ease and accumulated enormous wealth, and Hussite priests would not be allowed “worldly possessions.” Even popes, he taught, need not be obeyed if they placed themselves between the people and their God. God, not priests, absolves us of sin, he said; thus, the pope had no right to issue or to sell indulgences. What was probably most damning in the eyes of the official church was his contention that “Christ, not Peter (and, by implication, his successors) was the rock upon which the church was built.” Above all, Hus wanted people to access God directly, bypassing the church’s claim to be mediator. He believed in the power of the Holy Spirit and was a profoundly spiritual man.

Council of Constance

To put an end to the papal schism and to take up the long-desired reform of the church, a general council was convened for November 1, 1414, at Constance (Konstanz, Germany). The Emperor Sigismund of Luxemburg, brother of Wenceslaus and heir to the Bohemian crown, was anxious to clear the country of the blemish of heresy. Hus likewise was willing to make an end of all dissensions, and gladly followed Sigismund’s request to go to Constance.

From the sermons that he took along, it is evident that he intended to convert the assembled fathers to his own (i.e., Wycliffe’s) principal doctrines. Sigismund promised him safe-conduct, guaranteeing his safety for the duration of his journey; as a secular ruler he would not have been able to make any guarantees for the safety of Hus in a papal court, a fact that Hus would have been aware of. However, Hus was probably reckoning that a guarantee of safe conduct was also a sign of patronage by the king and that therefore he could rely on royal support during the proceedings.

Imprisonment and preparations for trial

It is unknown whether Hus knew what his fate would be, though Black (1911) suggests that he had some premonition of it: he ordered all his affairs with a “…presentiment, which he did not conceal, that in all probability he was going to his death.” He assembled testimonies to prove to the council that he held orthodox beliefs. He started on his journey on October 11, 1414; on November 3, 1414, he arrived at Constance, and on the following day bulletins on the church doors announced that Michal z Německého Brodu would be the opponent of Hus, “the heretic.” En route he had been kindly and enthusiastically received “at almost all the halting places.”

In the beginning Hus was at liberty, living at the house of a widow, but after a few weeks his opponents succeeded in imprisoning him, on the strength of a rumor that he intended to flee. He was first brought into the residence of a canon, and then, on December 8, 1414, into the dungeon of the Dominican monastery. Sigismund was greatly angered, having previously guaranteed safe-conduct, and threatened the prelates with dismissal, but when it was hinted that in such a case the council would be dissolved, he yielded.

On December 4, 1414, the Pope had entrusted a committee of three bishops with a preliminary investigation against him. The witnesses for the prosecution were heard, but Hus was refused an advocate for his defense. His situation became worse after the catastrophe of Antipope John XXIII, who had left Constance to evade the necessity of abdicating. So far Hus had been the captive of the pope and in constant intercourse with his friends, but now he was delivered to the archbishop of Constance and brought to his castle, Gottlieben on the Rhine. Here he remained for seventy-three days, separated from his friends, chained day and night, poorly fed, and tortured by disease.

Trial

On June 5, 1415, he was tried for the first time, and for that purpose was transferred to a Franciscan monastery, where he spent the last weeks of his life.

He acknowledged as his own the writings on the church and the treatises against Páleč and against Stanislav ze Znojma, and declared himself willing to recant if his errors should be proven to him.

Hus conceded his veneration of Wycliffe, and said that he could only wish his soul might some time attain unto that place where Wycliffe’s was. On the other hand, he denied having defended Wycliffe’s doctrine of The Lord’s Supper or the forty-five articles; he had only opposed their summary condemnation.

The king admonished him to deliver himself up to the mercy of the council, as he did not desire to protect a heretic. At the last trial, on June 8, 1415, thirty-nine sentences were read to him, twenty-six of which had been excerpted from his book on the church, seven from his treatise against Páleč, and six from that against Stanislav ze Znojma. The danger of some of these doctrines as regards worldly power was explained to the emperor to incite him against Hus.

Hus again declared himself willing to submit if he could be convinced of errors. He desired only a fairer trial and more time to explain the reasons for his views. If his reasons and Bible texts did not suffice, he would be glad to be instructed. This declaration was considered an unconditional surrender, and he was asked to confess:

  1. that he had erred in the theses which he had hitherto maintained;
  2. that he renounced them for the future;
  3. that he recanted them; and
  4. that he declared the opposite of these sentences.

He asked to be exempted from recanting doctrines that he had never taught; others, which the assembly considered erroneous, he was willing to revoke; to act differently would be against his conscience. These words found no favorable reception. After the trial on June 8, several other attempts were made to induce him to recant, but he resisted all of them.

The attitude of Sigismund was due to political considerations—he looked upon the return of Hus to his country as dangerous, and thought the terror of execution might improve the situation. Hus no longer hoped to live, and he may in some way have looked forward to becoming a martyr.

Condemnation and execution

The condemnation took place on July 6, 1415, in the presence of the solemn assembly of the council in the cathedral. Each voting member stood up and delivered his own speech, ending with a vote as to whether Hus should live or die. A sizable minority voted to save Hus’s life, but the majority ruled.

If the beginning of the day could be called solemn, the scene after the voting was one of scuffles and chairs being thrown.

After the performance of High Mass and Liturgy, Hus was led into the church. The Bishop of Lodi, Italy, delivered an oration on the duty of eradicating heresy; then some theses of Hus and Wycliffe and a report of his trial were read. He protested loudly several times, and when his appeal to Christ was rejected as a condemnable heresy, he exclaimed, “O God and Lord, now the council condemns even Your own act and Your own law as heresy, since You Yourself did lay Your cause before Your Father as the just judge, as an example for us, whenever we are sorely oppressed.”

Refusals to recant

An Italian prelate pronounced the sentence of condemnation upon Hus and his writings. Again he protested loudly, saying that even at this hour he did not wish anything but to be convinced from Holy Scripture. He fell upon his knees and asked God with a low voice to forgive all his enemies.

Then followed his degradation—he was enrobed in priestly vestments and again asked to recant; again he refused. With curses his ornaments were taken from him, his priestly tonsure was destroyed, and the sentence was pronounced that the church had deprived him of all rights and delivered him to the secular powers. Then a high paper hat was put upon his head, with the inscription “Haeresiarcha” (meaning the leader of a heretical movement). Hus was led away to the stake under a strong guard of armed men.

At the place of execution he knelt down, spread out his hands, and prayed aloud. Some of the people asked that a confessor should be given him, but one priest exclaimed that a heretic should neither be heard nor given a confessor. The executioners undressed Hus and tied his hands behind his back with ropes, and his neck with a chain to a stake around which wood and straw had been piled up so that it covered him to the neck.

At the last moment, the imperial marshal, Von Pappenheim, in the presence of the Count Palatine, asked him to recant and thus save his life, but Hus declined with the words, “God is my witness that I have never taught that of which I have by false witnesses been accused. In the truth of the Gospel which I have written, taught, and preached, I will die today with gladness.”

Burning at the stake

As the fire was kindled, Hus sang, “Christ, Son of the living God, have mercy upon me.” When he started this for a third time and continued “…who is born of Mary the Virgin,” the wind blew the flame into his face; he still moved lips and head, and then died of suffocation. His clothes were thrown into the fire, his ashes gathered and cast into the nearby Rhine. Some sources report him as saying “O sancta simplicitas!” (“Oh holy simplicity!”) when he stood upon the stake and saw a woman adding more wood to it.

On December 18, 1999, Pope John Paul II apologized for the execution of Jan Hus.

Source of his influence

The great success of Hus in his native country was due mainly to his unsurpassed pastoral activity, which far excelled that of the famous old preachers of Bohemia. Hus himself put the highest value on the sermon and knew how to awaken the enthusiasm of the masses. His sermons were often inflammatory in content: he introduced his quarrels with his spiritual superiors, criticized contemporaneous events, and appealed to his congregation as witness or judge. It was this bearing that multiplied his adherents, and thus he became the true apostle of his English master without being himself a theorist in theological questions.

Other historians would attribute his success to his and his listeners’ deep belief in the holy word and the corruption of the Catholic Church. During Hus’s trial, he never made claims to originality, but instead advocated a return to the word of the Bible. He continued to repeat that if it could be shown from the Bible that he had erred, he would gladly recant and be corrected. His single-minded pursuit of the truth was liberating to Europe and was perhaps his greatest legacy.

Hus’ friend and devoted follower, Jerome of Prague, shared his fate, although he did not suffer death till nearly a year later, in 1416.

Legacy

The Hussites continued to practice his teachings. They administered communion regularly, preached and read the Bible in the vernacular, denied priests any worldly possessions, increasingly disliked images and the observance of festivals, and tended towards a ‘memorial’ understanding of communion, similar to Ulrich Zwingli’s (Christie-Murray, 120). They held that the Bible contains all Christian teaching, and thus that the councils and the creeds are not binding. After the seventeenth century, many Hussites joined other Protestant churches such as the Lutheran and Moravian churches. The movement had two branches, the Utraquists and the Unitas Fratrum (or Bohemian Brethren).

The first group reached a compromise with the Catholic Church, allowing them to practice differently from other Catholics but under the church’s authority. This followed their popular uprising against King Sigismund (1368–1437) and a series of military confrontations in which they proved themselves difficult to defeat by military means. A crusade had been proclaimed against the Hussites in 1420, and in 1430 Joan of Arc wrote a letter to the Hussites, threatening to wage war on them unless they returned to the Catholic Church. Count Lutzow (1911) suggests that the democratic character of the Hussite movement was itself feared by their princely opponents, “who were afraid that such views might extend to their own countries,” so instead they sued for peace. A formal compact was signed on July 5, 1436, allowing the Hussites to give the sacrament freely in both kinds, to preach freely, and affirming that their priests would “claim no ownership of worldly possessions”. When Sigismund regained power he tried to rescind this but was not able to do so.

The second group can be regarded as the spiritual heir of the Hussites, originating in Bohemia as the Unitas Fratrum (or the Bohemian Brethren), a group that on the one hand maintained the historic episcopacy while on the other hand following Hus’ teaching. They especially stressed pacifism as a Christian virtue. The Hussites later gained legal status alongside Catholics. Their basic beliefs were set out in the Four Articles of Prague (1420):

1. Freedom to preach the Word of God.
2. Celebration of the Lord’s Supper in both kinds (bread and wine to priests and laity alike).
3. No profane power for the clergy.
4. The same law for laity and priests (see Christie-Murray, 119).

In 1620, early in the Thirty Years’ War, members were forced to accept Roman Catholic authority or to flee from all parts of the Holy Roman Empire, including Bohemia. Some settled in Protestant parts of Germany, where the movement was reorganized as the Moravian Church by Count Nicolaus Ludwig Zinzendorf (1700–1760). The Moravians stress personal, inner piety, Christian unity, overseas missions and self-reliance (all missionaries support themselves with a trade). Moravians are in full communion with Lutherans, and many regard the “church” to be an “order” within the Lutheran fellowship, which is how John Wesley, who was influenced by the Moravians, originally saw his Methodists.

Some critics say that Hus’ work was mainly borrowed from Wycliffe but Black (1911) comments that his Super IV Sententiarum proves that he was a “…man of profound learning.” However, concludes Black, Hus’ “principal glory will always be founded on his spirituality [whose] honour of having been one of the bravest of the martyrs [that died for the] cause of honesty and freedom…[and he] handed on from Wycliffe to Luther the torch which kindled the reformation”.

Hus is honored in the Czech Republic on July 6, known as Jan Hus Day (Den upálení mistra Jana Husa), the anniversary of his execution.

 

Major-General Orde Charles Wingate


Major-General Orde Charles Wingate (February 26, 1903 – March 24, 1944) was a decorated and at times controversial British Army officer who created special military units in Palestine in the 1930s and in World War II. In 1942 he formed the Chindits, the special forces that penetrated behind Japanese lines in Burma, pioneering the use of air and radio support for troops deep within enemy territory. He has been described as the father of modern guerrilla warfare, although he preferred to see his forces as countering guerrilla action rather than as engaged in this type of warfare. He has also been called the father of the Israel Defense Forces. In Israel, he is remembered as “Ha-yedid” (the friend). Less popular with his superiors than with his men, he inspired the loyalty and admiration of the latter.

Perhaps the most important aspect of Wingate’s legacy is that his career raised moral issues that remain of concern in situations involving unconventional warfare. For example, when regular soldiers respond to acts of terror or attacks committed by people who are not members of the official armed forces of a recognized nation-state, what rules of combat apply? The post-September 11, 2001, “war on terror” raised similar concerns relating to the status of prisoners and how they should be treated, held accountable, or put on trial for any alleged war crimes. A man of deep Christian faith, Wingate saw war as a necessary evil. He did not glory in war. He knew that unless fought for a just cause and to defeat evil, war becomes an unnecessary evil. He gave his life in his nation’s service when his plane crashed in Burma in 1944.

Childhood and education

Wingate was born February 26, 1903 in Naini Tal, India to a military family. His father had become a committed member of the Plymouth Brethren early in his army career in India, and at the age of 46 married Mary Ethel Orde-Brown, oldest daughter of a family who were also Plymouth Brethren (after wooing her for 20 years). His father reached retirement from the army two years after Wingate was born and he spent most of his childhood in England where he received a very religious upbringing and was introduced to Christian Zionist ideas at a very young age. It was not uncommon for the young Wingate to be subjected to long days of reading and memorizing the Old Testament.

Besides a strict religious upbringing, Wingate was also subjected, by his father, to a harsh and Spartan regimen, living with a daily consciousness of hell-fire and eternal damnation. Because of their parents’ strict beliefs the family of seven children were kept away from other children and from the influence of the outside world. Until he was 12 years old, Orde had hardly ever mixed with children of his own age.

In 1916, his family having moved to Godalming, Wingate attended Charterhouse School as a day boy. Because he did not board at the school and took no part in sports, he became increasingly separate and isolated, so that he missed out on many of the aspects of a public school (independent school) education of the period. At home, lazing about and idling were forbidden, and the children were always given challenging objectives to encourage independent thought, initiative and self reliance.

Early army career

After four years Wingate left Charterhouse and in 1921 he was accepted into the Royal Military Academy at Woolwich, the Royal Artillery’s officers’ training school. For committing a minor offense against the rules a first-year student would be subjected to a ragging ritual named “running.” This ritual consisted of the first-year being stripped and forced to run a gauntlet of senior students, all of whom wielded a knotted towel which they used to hit the accused on his journey along the line. On reaching the end, the first-year would then be thrown into an icy cold cistern of water. When it came time for Wingate to run the gauntlet, for allegedly having returned a horse to the stables too late, he walked to the senior student at the head of the gauntlet, stared at him and dared him to strike. The senior refused. Wingate moved to the next senior and did the same; he too refused. In turn each senior declined to strike, and on coming to the end of the line Wingate walked to the cistern and dived straight into the icy cold water.

In 1923 Wingate received his gunnery officer’s commission and was posted to the 5th Medium Brigade at Larkhill on Salisbury Plain. During this period he was able to exercise his great interest in horse-riding, gaining a reputation for his skill (and success) in point-to-point races and during fox-hunting, particularly for finding suitable places to cross rivers which earned him the nickname “Otter”. It was difficult in the 1920s for an army officer to live on his pay and Wingate, living life to the full, also gained a reputation as a late payer of his bills. In 1926, because of his prowess in riding, Wingate was posted to the Military School of Equitation where he excelled much to the chagrin of the majority of the cavalry officers at the centre who found him insufferable – frequently challenging the instructors in a demonstration of his rebellious nature.

Sudan, 1928–1933

Wingate’s father’s “Cousin Rex”, Sir Reginald Wingate, a retired army general who had been Governor-General of Sudan between 1899 and 1916 and High Commissioner of Egypt from 1917 to 1919, had a considerable influence over Wingate’s career at this time. He gave him a positive interest in Middle East affairs and in Arabic. As a result Wingate successfully applied to take a course in Arabic at the School of Oriental Studies in London and passed out of the course, which lasted from October 1926 to March 1927, with a mark of 85 percent.

In June 1927, with Cousin Rex’s encouragement, Wingate obtained six months leave in order to mount an expedition in the Sudan. Rex had suggested that he travel via Cairo and then try to obtain secondment to the Sudan Defence Force. Sending his luggage ahead of him, Wingate set off in September 1927 by bicycle, traveling first through France and Germany before making his way to Genoa via Czechoslovakia, Austria and Yugoslavia. Here he took a boat to Egypt. From Cairo he traveled to Khartoum. In April 1928 his application to transfer to the Sudan Defence Force came through and he was posted to the East Arab Corps, serving in the area of Roseires and Gallabat on the borders of Ethiopia where the SDF patrolled to catch slave traders and ivory poachers. He changed the method of regular patrolling to ambushes.

In March 1930 Wingate was given command of a company of 300 soldiers with the local rank of Bimbashi (major). He was never happier than when in the bush with his unit but when at HQ in Khartoum he antagonized the other officers with his aggressive and argumentative personality.

At the end of his tour, Wingate mounted a short expedition into the Libyan Desert to investigate the lost army of Cambyses, mentioned in the writings of Herodotus, and to search for the lost oasis of Zerzura. Supported by equipment from the Royal Geographical Society (the findings of the expedition were published in the Royal Geographical Magazine in April 1934) and the Sudan Survey Department, the expedition set off in January 1933. Although they did not find the oasis, Wingate saw the expedition as an opportunity to test his endurance in a very harsh physical environment and also his organizational and leadership abilities.

Return to the UK, 1933

On his return to the UK in 1933, Wingate was posted to Bulford on Salisbury Plain and was heavily involved in retraining, as British artillery units were being mechanized. On the sea journey home from Egypt he had met Lorna Moncrieff Patterson, who was 16 years old and traveling with her mother. They were married two years later, on January 24, 1935.

Palestine and the Special Night Squads

In 1936 Wingate was assigned to the British Mandate of Palestine to a staff office position and became an intelligence officer. From his arrival, he saw the creation of a Jewish State in Palestine as being a religious duty toward the literal fulfillment of prophecy and he immediately put himself into absolute alliance with Jewish political leaders. He believed that Britain had a providential role to play in this process. Wingate learned Hebrew.

Arab guerrillas had at the time of his arrival begun a campaign of attacks against both British mandate officials and Jewish communities, which became known as the Arab Revolt.

Wingate became politically involved with a number of Zionist leaders, eventually becoming an ardent supporter of Zionism, despite the fact that he was not Jewish. He formulated the idea of raising small assault units of British-led Jewish commandos, heavily armed with grenades and light infantry small arms, to combat the Arab uprising, and took his idea personally to Archibald Wavell, who was then a commander of British forces in Palestine. After Wavell gave his permission, Wingate convinced the Zionist Jewish Agency and the leadership of Haganah, the Jewish armed group.

In June 1938 the new British commander, General Haining, gave his permission to create the Special Night Squads, armed groups formed of British and Haganah volunteers. This is the first instance of the British recognizing Haganah’s legitimacy as a Jewish defense force. The Jewish Agency helped pay salaries and other costs of the Haganah personnel.

Wingate trained, commanded and accompanied them in their patrols. The units frequently ambushed Arab saboteurs who attacked oil pipelines of the Iraq Petroleum Company, raiding border villages the attackers had used as bases. In these raids, Wingate’s men sometimes imposed severe collective punishments on the village inhabitants that were criticized by Zionist leaders as well as Wingate’s British superiors. But the tactics proved effective in quelling the uprising, and Wingate was awarded the DSO in 1938.

However, his deepening direct political involvement with the Zionist cause and an incident where he spoke publicly in favor of formation of a Jewish state during his leave in Britain, caused his superiors in Palestine to remove him from command. He was so deeply associated with political causes in Palestine that his superiors considered him compromised as an intelligence officer in the country. He was promoting his own agenda rather than that of the army or the government.

In May 1939, he was transferred back to Britain. Wingate became a hero of the Yishuv (the Jewish community), and was loved by leaders such as Zvi Brenner and Moshe Dayan who had trained under him, and who claimed that Wingate had “taught us everything we know.” He dreamed, says Oren, “of one day commanding the first Jewish army in two thousand years and of leading the fight to establish an independent Jewish state.”

Wingate’s political attitudes toward Zionism were heavily influenced by his Plymouth Brethren religious views and belief in certain eschatological doctrines.

Ethiopia and the Gideon Force

At the outbreak of World War II, Wingate was the commander of an anti-aircraft unit in Britain. He repeatedly made proposals to the army and government for the creation of a Jewish army in Palestine which would rule over the area and its Arab population in the name of the British. Eventually his friend Wavell, by this time Commander-in-Chief of Middle East Command which was based in Cairo, invited him to Sudan to begin operations against Italian occupation forces in Ethiopia. Under William Platt, the British commander in Sudan, he created the Gideon Force, a guerrilla force composed of British, Sudanese and Ethiopian soldiers. The force was named after the biblical judge Gideon, who defeated a large force with a tiny band. Wingate invited a number of veterans of the Haganah SNS to join him. With the blessing of the Ethiopian king, Haile Selassie, the group began to operate in February 1941. Wingate was temporarily promoted to lieutenant colonel and put in command. He again insisted on leading from the front and accompanied his troops. The Gideon Force, with the aid of local resistance fighters, harassed Italian forts and their supply lines while the regular army took on the main forces of the Italian army. The small Gideon Force of no more than 1,700 men took the surrender of about 20,000 Italians toward the end of the campaign. At the end of the fighting, Wingate and the men of the Gideon Force linked with the force of Lieutenant-General Alan Cunningham which had advanced from Kenya to the south and accompanied the emperor in his triumphant return to Addis Ababa in May. Wingate was mentioned in dispatches in April 1941 and was awarded a second DSO in December.

With the end of the East African Campaign, on June 4, 1941, Wingate was removed from command of the now-dismantled Gideon Force and his rank was reduced to that of major. During the campaign he was irritated that British authorities ignored his request for decorations for his men and obstructed his efforts to obtain back pay and other compensation for them. He left for Cairo and wrote an official report extremely critical of his commanders, fellow officers, government officials and many others. Wingate was also angry that his efforts had not been praised by authorities, and that he had been forced to leave Abyssinia without having said farewell to Emperor Selassie. Wingate was most concerned about British attempts to stifle Ethiopian freedom, writing that attempts to raise future rebellions amongst populations must be honest ones and should appeal to justice. Soon after, he contracted malaria. He sought treatment from a local doctor instead of army doctors because he was afraid that the illness would give his detractors a further excuse to undermine him. This doctor gave him a large supply of the drug Atabrine, which can produce depression as a side effect if taken in high doses. Already depressed over the official response to his Abyssinian command, and sick with malaria, Wingate attempted suicide by stabbing himself in the neck.

Wingate was sent to Britain to recuperate. A highly edited version of his report was passed through Wingate’s political supporters in London to Winston Churchill. Consequently, Leo Amery, the Secretary of State for India, contacted Wavell, now Commander-in-Chief in India, to enquire if there was any chance of employing Wingate in the Far East. On February 27, 1942, Wingate, far from pleased with his posting as a “supernumerary major without staff grading,” left Britain for Rangoon.

Burma

Chindits and the First Long-Range Jungle Penetration Mission

On Wingate’s arrival in March 1942 in the Far East he was appointed colonel once more by General Wavell, and was ordered to organize counter-guerrilla units to fight behind Japanese lines. However, the precipitous collapse of Allied defenses in Burma forestalled further planning, and Wingate flew back to India in April, where he began to promote his ideas for jungle long-range penetration units.

Intrigued by Wingate’s theories, General Wavell gave Wingate a brigade of troops, the Indian 77th Infantry Brigade, which was eventually named the Chindits, after a corrupted version of the name of a mythical Burmese lion, the chinthe. By August 1942 he had set up a training center near Gwalior and attempted to toughen up the men by having them camp in the Indian jungle during the rainy season. This proved disastrous, as the result was a very high sick rate among the men. In one battalion 70 percent of the men went absent from duty due to illness, while a Gurkha battalion was reduced from 750 men to 500. Many of the men were replaced in September 1942 by new drafts of personnel from elsewhere in the army.

Meanwhile, his direct manner of dealing with fellow officers and superiors, along with eccentric personal habits, won him few friends among the officer corps; he would consume raw onions because he thought they were healthy, scrub himself with a rubber brush instead of bathing, and greet guests to his tent while completely naked. However, Wingate’s political connections in Britain and the patronage of General Wavell (who had admired his work in the Abyssinian campaign) protected him from closer scrutiny.

The original 1943 Chindit operation was supposed to be a coordinated plan with the field army. When the offensive into Burma by the rest of the army was cancelled, Wingate persuaded Wavell to be allowed to proceed into Burma anyway, arguing the need to disrupt any Japanese attack on Sumprabum as well as to gauge the utility of long-range jungle penetration operations. Wavell eventually gave his consent to Operation Longcloth.

Wingate set out from Imphal on February 12, 1943 with the Chindits organized into eight separate columns to cross the Chindwin river. The force met with initial success in putting one of the main railways in Burma out of action. But afterward, Wingate led his force deep into Burma and then over the Irrawaddy River. Once the Chindits had crossed the river, they found conditions very different from those suggested by the intelligence they had received. The area was dry and inhospitable, criss-crossed by motor roads which the Japanese were able to use to good effect, particularly in interdicting supply drops to the Chindits, who soon began to suffer severely from exhaustion and shortages of water and food. On March 22 Eastern Army HQ ordered Wingate to withdraw his units back to India. Wingate and his senior commanders considered a number of options to achieve this, but all were threatened by the fact that, with no major army offensive in progress, the Japanese would be able to focus their attention on destroying the Chindit force. Eventually they agreed to retrace their steps to the Irrawaddy, since the Japanese would not expect this, and then disperse to make attacks on the enemy as they returned to the Chindwin.

By mid-March, the Japanese had three infantry divisions chasing the Chindits, who were eventually trapped inside the bend of the Shweli River by Japanese forces. Unable to cross the river intact and still reach British lines, the Chindit force was forced to split into small groups to evade enemy forces. The Japanese paid great attention to preventing air resupply of the Chindit columns, as well as hindering their mobility by removing boats from the Irrawaddy, Chindwin, and Mu rivers and actively patrolling the river banks. Continually harassed by the Japanese, the force returned to India by various routes during the spring of 1943 in groups ranging from single individuals to whole columns: some directly, others via a roundabout route through China. Casualties were high, and the force lost approximately one-third of its total strength.

When men were injured, Wingate would leave them “beside the trail” with water, ammunition and a Bible and “often, before the departing troops were out of earshot, they heard the explosion of gunshots from the place where they had left the wounded, who had chosen not to wait for Japanese troops to arrive.” His men, however, were deeply loyal.

After-Battle analysis

With the losses incurred during the first long-range jungle penetration operation, many officers in the British and Indian army questioned the overall value of the Chindits. The campaign had the unintended effect of convincing the Japanese that certain sections of the Burma/India Frontier were not as impassable as they previously believed, thus altering their strategic plans. As one consequence, the overall Japanese Army commander in Burma, General Masakazu Kawabe, began planning a 1944 offensive into India to capture the Imphal Plain and Kohima, in order to better defend Burma from future Allied offensives.

However, in London the Chindits and their exploits were viewed as a success after the long string of Allied disasters in the Far East theater. Winston Churchill, an ardent proponent of commando operations, was particularly complimentary towards the Chindits and their accomplishments. Afterwards, the Japanese admitted that the Chindits had completely disrupted their plans for the first half of 1943. As a propaganda tool, the Chindit operation was used to prove to the army and those at home that the Japanese could be beaten and that British/Indian troops could successfully operate in the jungle against experienced Japanese forces. On his return, Wingate wrote an operations report in which he again was highly critical of the army and even some of his own officers and men. He also promoted more unorthodox ideas, for example that British soldiers had become weak by having too easy access to doctors in civilian life. The report was again passed through back-channels by Wingate’s political friends in London directly to Churchill. Churchill then invited Wingate to London. Soon after Wingate arrived, Churchill decided to take him and his wife along to the Quebec Conference. The Chief of the Imperial General Staff, Alan Brooke (Lord Alanbrooke), was astonished at this decision. In his War Diaries Alanbrooke wrote after his interview with Wingate in London on August 4:

“I was very interested in meeting Wingate…. I considered that the results of his form of attacks were certainly worth backing within reason…. I provided him with all the contacts in England to obtain what he wanted, and told him that on my return from Canada I would go into the whole matter with him…[later] to my astonishment I was informed that Winston was taking Wingate and his wife with him to Canada! It could only be as a museum piece to impress the Americans! There was no other reason to justify this move. It was sheer loss of time for Wingate and the work he had to do in England.”

There, Wingate explained his ideas of deep penetration warfare to the Combined Chiefs of Staff meeting on August 17. Brooke wrote on August 17: “Quite a good meeting at which I produced Wingate who gave a first class talk of his ideas and of his views on the running of the Burma campaign.” Air power and radio, recent developments in warfare, would allow units to establish bases deep in enemy territory, breaching the outer defenses and extending the range of conventional forces. The leaders were impressed, and larger scale deep penetration attacks were approved.

Second long-range jungle penetration mission

On his return from his meeting with Allied leaders, Wingate contracted typhoid by drinking bad water on the way back to India. His illness prevented him from taking a more active role in the training of the new long-range jungle forces.

Once back in India, Wingate was promoted to acting major general, and was given six brigades. At first, Wingate proposed to convert the entire front into one giant Chindit mission by breaking up the entire 14th Army into Long-Range Penetration units, presumably in the expectation that the Japanese would follow them around the Burmese jungle in an effort to wipe them out. This plan was hurriedly dropped after other commanders pointed out that the Japanese Army would simply advance and seize the forward operating bases of Chindit forces, requiring a defensive battle and substantial troops that the Indian Army would be unable to provide.

In the end, a new long-range jungle penetration operation was planned, this time using all six of the brigades recently allocated to Wingate. This included 111 Brigade, a recently-formed unit known as the Leopards. While Wingate was still in Burma, General Wavell had ordered the formation of 111 Brigade along the lines of the 77 Brigade Chindits, selecting General Joe Lentaigne as the new Commander. 111 Brigade would later be joined by 77 Brigade Chindits in parallel operations once the latter had recovered from prior combat losses.

The second Long-Range Penetration mission was originally intended as a coordinated effort with a planned regular army offensive against northern Burma, but events on the ground resulted in cancellation of the army offensive, leaving the Long-Range Penetration Groups without a means of transporting all six brigades into Burma. Upon Wingate’s return to India, he found that his mission had also been canceled for lack of air transport. Wingate took the news bitterly, voicing disappointment to all who would listen, including Allied commanders such as Colonel Philip Cochran of the 1st Air Commando Group. This proved to be a blessing in disguise: Cochran told Wingate that canceling the long-range mission was unnecessary, since in addition to the light planes and C-47 Dakotas Wingate had counted on, 1st Air Commando had 150 gliders to haul supplies. Wingate’s dark eyes widened as Cochran explained that the gliders could also move a sizable force of troops. The general immediately spread a map on the floor and planned how his Chindits, airlifted deep into the jungle, could fan out from there and fight the Japanese.

With his new glider landing option, Wingate decided to proceed into Burma anyway. The character of the 1944 operations was totally different from that of 1943: the new operations would establish fortified bases in Burma out of which the Chindits would conduct offensive patrol and blocking operations. A similar strategy would be used by the French in Indochina years later at Dien Bien Phu.

On March 6, 1944, the new long-range jungle penetration brigades, now collectively referred to as Chindits, began arriving in Burma by glider and parachute, establishing base areas and drop zones behind Japanese lines. By fortunate timing, the Japanese launched an invasion of India around the same time. By forcing several pitched battles along their line of march, the Chindit columns were able to disrupt the Japanese offensive, diverting troops from the battles in India.

Death

On March 24, 1944, Wingate flew to assess the situation at three Chindit-held bases in Burma. On his return, flying from Imphal to Lalaghat, the US B-25 Mitchell in which he was traveling crashed into jungle-covered hills near Bishenpur (Bishnupur), in the present-day state of Manipur in Northeast India, killing him and nine others. General Joe Lentaigne was appointed to overall command of LRP forces in place of Wingate; he flew out of Burma to assume command as Japanese forces began their assault on Imphal. Command of 111 Brigade in Burma was assigned to Lt. Col. ‘Jumbo’ Morris and Brigade Major John Masters.

Eccentricities

Wingate was known for various eccentricities. For instance, he often wore an alarm clock around his wrist, which would go off at times, and a raw onion on a string around his neck, which he would occasionally bite into as a snack. He often went about without clothing. In Palestine, recruits were used to having him emerge from the shower to give them orders, wearing nothing but a shower cap and continuing to scrub himself with a shower brush. Lord Moran, Winston Churchill’s personal physician, wrote in his diaries that “[Wingate] seemed to me hardly sane – in medical jargon a borderline case.” He always carried a Bible.

Commemoration

Orde Wingate was originally buried at the site of the air crash in the Naga Hills in 1944. In April 1947, his remains and those of the other victims of the crash were moved to the British Military Cemetery in Imphal, India. In November 1950, all the remains were reinterred at Arlington National Cemetery, Virginia, in keeping with the custom of repatriating remains in mass graves to the country of origin of the majority of the soldiers.

A memorial to Orde Wingate and the Chindits stands on the north side of the Victoria Embankment, near Ministry of Defence headquarters in London. The facade commemorates the Chindits and the four men awarded the Victoria Cross. The battalions that took part are listed on the sides, with non-infantry units mentioned by their parent formations. The rear of the monument is dedicated to Orde Wingate, and also mentions his contributions to the state of Israel.

To commemorate Wingate’s great assistance to the Zionist cause, Israel’s National Centre for Physical Education and Sport, the Wingate Institute (Machon Wingate), was named after him. A square in the Rehavia neighborhood of Jerusalem, Wingate Square (Kikar Wingate), also bears his name, as does the Yemin Orde youth village near Haifa. A Jewish football club formed in London in 1946, Wingate F.C., was also named in his honor.

A memorial stone in his honor stands in Charlton Cemetery, London SE7, where other members of the Orde Browne family are buried.

Family

Orde Wingate’s son, Orde Jonathan Wingate, joined the Honourable Artillery Company and rose through the ranks to become the regiment’s Commanding Officer and later Regimental Colonel. He died in 2000 at the age of 56, and is survived by his wife and two daughters. Other members of the Wingate family live around England.

Legacy

Wingate is credited with having developed modern guerrilla warfare tactics. He used radio and air transport to coordinate his small, highly mobile special units, which he believed could operate for twelve weeks at a time. Davison writes that he was responsible for “important tactical innovations,” including “techniques of irregular warfare and effective use of air support in tropical terrain.” The Chindits relied on air drops for their supplies. Mead remarks that he is generally acknowledged to have perfected the technique of “maintaining troops without a land line of communication.” Mead argues that the official account of World War II is biased against Wingate due to personal animosity between Slim and Wingate; Slim thought Wingate too ambitious and obsessed with his own theory that behind-the-lines action was the best strategy to defeat the Japanese. On the one hand, he was “a complex man – difficult, intelligent, ruthless and prone to severe depression.” On the other hand, his “military legacy” is “relevant to any military students today.”

Critics of his campaign in Palestine argue that he blurred the distinction between military personnel and civilians, although he always “stressed that squads should not mistreat … prisoners or civilians.” The problem was that the gangs he was fighting against received assistance from civilians. In Israel, he is remembered as “Ha-yedid” (the friend) and considered by some to be the father of the Israel Defense Forces. He is remembered as “a heroic, larger than life figure to whom the Jewish people” owe “a deep and enduring debt.” Oren comments that for every book praising Wingate there is another that assails him as an “egotist, an eccentric” and “even a madman.” Some accuse him of having employed “terror against terror.”

Perhaps the most important aspect of Wingate’s legacy is that many of the moral issues raised by his career remain of concern in situations involving unconventional warfare. For example, when regular soldiers respond to acts of terror or attacks committed by people who are not members of the official armed forces of a recognized nation-state, what rules of combat apply? In the continued conflict between the State of Israel, which Wingate did not live to see established, and members of various para-military groups, these issues remain center stage. Some, such as Moreman, argue that the Chindits were significant mainly in boosting morale, not strategically. Others, including Rooney and Dunlop, suggest that they made an important contribution towards the July 1944 defeat of the Japanese in Burma, weakening their position in the jungle. As early as 1945, the Chindits were being studied in military training schools. After his death, Wavell compared Wingate with T. E. Lawrence, although he stressed that the former was more professional. Slim described him as possessing “sparks of genius” and said that he was among the few men in the war who were “irreplaceable.” Others have commented on his “supremacy both in planning, training and as a leader.” Mead remarks that “there is no evidence that Wingate had personal ambitions”; rather, he appears to have wanted to serve his nation to the best of his ability by using his expertise in irregular combat where it could be most effective. He saw war as a “necessary evil.” When asked by the future Israeli Foreign Secretary what he meant when he called one man bad and another good, he replied, “I mean he is one who lives to fulfill the purposes of God.” To Orde Wingate, “good and evil, and the constant struggle between light and darkness in the world and in the heart of man, were … real,” and he took this conviction with him into war. At the very least, this suggests that Wingate thought deeply about the morality of war.
As the first Chindit expedition left, he concluded his order with “Let us pray God may accept our services and direct our endeavors so that when we shall have done all, we shall see the fruit of our labors and be satisfied.” He sometimes cited the Bible in his military communiqués.

 

Cowboy


A cowboy is an animal herder, usually in charge of the horses and/or cattle, on cattle ranches, especially in the western United States and Canada. The cowboy tradition began in Spain and was subsequently transported into North and South America, where it developed its unique and enduring character. Cowboys were an essential part of the nineteenth century American West, hired to keep a watchful eye over the large roving herds of cattle on the open range.

Today, in addition to ranch work, some cowboys work in and participate in rodeos, while some work exclusively in the rodeo. Cowboys also spawned a rich cultural tradition, made famous throughout the world through Western novels, songs, movies, and serial programs on radio and television.

Etymology

The word “cowboy” first appeared in the English language about 1715–25 C.E. It appears to be a direct English translation of vaquero, the Spanish term for an individual who managed cattle while mounted on horseback, derived from vaca, meaning “cow.” Another English word for a cowboy, buckaroo, is an Anglicization of vaquero.

A main difference between “vaquero” and “cowboy” is that the Spanish term lacks an implication of youth. Because of the time and physical ability needed to develop necessary skills, the American cow “boy” often began his career as an adolescent, earning wages as soon as he had enough skill to be hired, often as young as 12 or 13. In the United States, a few women also took on the tasks of ranching and learned the necessary skills, though the “cowgirl” did not become widely recognized or acknowledged until the close of the nineteenth century.

History

The Spanish cowboy tradition developed with the hacienda system of medieval Spain. This style of cattle ranching spread throughout much of the Iberian peninsula and was later exported to the Americas. Both regions possessed a dry climate with sparse grass, and thus large herds of cattle required vast amounts of land in order to obtain sufficient forage. The need to cover distances greater than a person on foot could manage gave rise to the development of the horseback-mounted vaquero.

During the sixteenth century, Spanish settlers brought their cattle-raising traditions as well as their horses and cattle to the Americas, starting with their arrival in what today is Mexico and Florida. The traditions of Spain were transformed by the geographic, environmental and cultural circumstances of New Spain, which later became Mexico and the southwestern United States.

The tradition evolved further, particularly in the central states of Mexico—Jalisco and Michoacán—where the Mexican cowboy would eventually be known as a “charro,” as well as areas to the north that later became the Southwestern United States. Most of these vaqueros were men of mestizo and Native American origin, while most of the hacendados (owners) were ethnically Spanish.

 

As English-speaking traders and settlers moved into the Western United States, English and Spanish traditions and culture merged to some degree, with the vaquero tradition providing the foundation of the American cowboy. Before the Mexican American War in 1848, New England merchants who traveled by ship to California encountered both hacendados and vaqueros, trading manufactured goods for the hides and tallow produced from vast cattle ranches. American traders along what later became known as the Santa Fe Trail had similar contacts with vaquero life. Starting with these early encounters, the lifestyle and language of the vaquero began a transformation which merged with English cultural traditions and produced what became known in American culture as the “cowboy.”

By the 1890s, railroads had expanded to cover most of the nation, making long cattle drives from Texas to the railheads in Kansas unnecessary. The invention of barbed wire allowed cattle to be confined to designated acreage to prevent overgrazing of the range, which had resulted in widespread starvation, particularly during the harsh winter of 1886-1887. Hence, the age of the open range was gone and large cattle drives were over. Smaller cattle drives continued at least into the 1940s, as ranchers, prior to the development of the modern cattle truck, still needed to herd cattle to local railheads for transport to stockyards and packing plants.

Ethnicity of the traditional cowboy

Cowboys ranked low in the social structure of the period, and there are no firm figures as to their ethnicity. Anglos, Mexicans, Native Americans, freed Negro slaves, and men of mixed blood were certainly among them.

Texas produced the greatest number of white cowboys, probably accounting for the plurality. It is estimated that about 15 percent of cowboys were of African-American ancestry. Similarly, U.S. cowboys of Mexican descent also averaged about 15 percent, but were more common in Texas and the southwest. (In Mexico, the vaqueros developed a distinct tradition and became known as charros.) Many early vaqueros were Native American people trained to work for the Spanish missions in caring for the mission herds. Later, particularly after 1890, when American policy promoted “assimilation” of Indians, some Indian boarding schools also taught ranching skills to native youth. Today, some Native Americans in the western United States own cattle and small ranches, and many are still employed as cowboys, especially on ranches located near Indian Reservations. The “Indian Cowboy” also became a commonplace sight on the rodeo circuit.

U.S cowboy traditions

Geographic and cultural factors caused differences to develop in cattle-handling methods and equipment from one part of the United States to another. In the modern world, remnants of two major and distinct cowboy traditions remain, known today as the “Texas” tradition and the “California” tradition, which is more closely related to its Spanish roots. Less well-known but equally distinct traditions developed in Hawaii and Florida.

Texas

In the early 1800s, the Spanish Crown, and later, independent Mexico, offered empresario grants in what would become Texas to non-citizens, such as settlers from the United States. In 1821, Stephen F. Austin and his East Coast comrades became the first Anglo-Saxon community in Spanish Texas. Following Texas independence in 1836, even more Americans immigrated into the empresario ranching areas of Texas. Here the settlers were strongly influenced by the Mexican vaquero culture, borrowing vocabulary and attire from their counterparts while also retaining some of the livestock-handling traditions and culture of the Eastern United States and Great Britain. The Texas cowboy was typically a bachelor who hired on with different outfits from season to season.

Following the American Civil War, vaquero culture diffused eastward and northward, combining with the cow herding traditions of the eastern United States that evolved as settlers moved west. Other influences developed out of Texas as cattle trails were created to meet up with the railroad lines of Kansas and Nebraska, in addition to expanding ranching opportunities in the Great Plains and Rocky Mountain Front, east of the Continental Divide.

The Texas cowboy tradition therefore arose from a combination of cultural influences and the need to conduct long cattle drives to get animals to market under often treacherous environmental conditions.

California

The vaquero, the Spanish or Mexican cowboy who worked with young, untrained horses, had flourished in California and bordering territories during the Spanish Colonial period. Settlers from the United States did not enter California until after the Mexican War, and most early settlers were miners rather than livestock ranchers, leaving livestock-raising largely to the Spanish and Mexican people who chose to remain in California. The California vaquero, or buckaroo, unlike the Texas cowboy, was considered a highly skilled worker, who usually stayed on the same ranch where he was born or had grown up and raised his own family there.

Florida cowhunters

The Florida “cowhunter” or “cracker cowboy” of the nineteenth and early twentieth centuries was distinct from the Texas and California traditions. Florida cowboys did not use lassos to herd or capture cattle. Their primary tools were bullwhips and dogs. Florida cattle and horses were small. The “cracker cow”—also known as the “native cow” or “scrub cow”—averaged about 600 pounds and had large horns and feet. Since the Florida cowhunter did not need a saddle horn for anchoring a lariat, many did not use Western saddles. They usually wore inexpensive wool or straw hats, and used ponchos for protection from rain.

Hawaiian Paniolo

The Hawaiian cowboy, the paniolo, is also a direct descendant of the vaquero of California and Mexico. By the early 1800s, cattle given by Captain George Vancouver to King Pai`ea Kamehameha of Hawaii had multiplied astonishingly and were wreaking havoc throughout the countryside. About 1812, John Parker, a sailor who had jumped ship and settled in the islands, received permission from Kamehameha to capture the wild cattle and develop a beef industry. This began the tradition of the “Paniolos,” a word thought to derive from a Hawaiianized pronunciation of the word Español. Many Hawaiian ranching families today still carry the names of the vaqueros who married Hawaiian women and made Hawaii their home.

Other nations

In addition to the Mexican vaqueros, the Mexican charro, the North American cowboy, and the Hawaiian paniolo, the Spanish also exported their horsemanship and knowledge of cattle ranching to the gaucho of Argentina, Uruguay, Paraguay and southern Brazil, the llanero of Venezuela, the huaso of Chile, and, indirectly (through the U.S.) to Australia. In Australia, which has a large ranch (station) culture, cowboys are known as stockmen and drovers, with trainee stockmen referred to as jackaroos and jillaroos.

The use of horseback riders to guard herds of cattle, sheep or horses is common wherever wide, open land for grazing exists. In the French Camargue, riders called “gardians” herd cattle. In Hungary, the csikós guard horses. The herders in the region of Maremma in Tuscany, Italy are called butteros.

In Canada, the ranching and cowboy tradition centers around the province of Alberta. The city of Calgary remains the center of the Canadian cattle industry and is called “Cowtown.” The Calgary Stampede which began in 1912 is the world’s richest cash rodeo. Each year, Calgary’s northern rival Edmonton, Alberta stages the Canadian Finals Rodeo, and dozens of regional rodeos are held throughout the province.

Cowgirls

There are few records mentioning girls or women driving cattle up the cattle trails of the Old West, even though women undoubtedly helped on the ranches, and in some cases ran them, especially when the men went to war. There is little doubt that women, particularly the wives and daughters of men who owned small ranches and could not afford to hire large numbers of outside laborers, worked side by side with men and thus needed to ride horses and be able to perform ranch work.

It was not until the advent of the Wild West shows that cowgirls came into their own. Their riding, expert marksmanship, and trick roping entertained audiences around the world. Women such as Annie Oakley became household names. By 1900, skirts split for riding astride allowed women to compete with the men without scandalizing Victorian Era audiences.

The growth of the rodeo brought about another type of cowgirl—the rodeo cowgirl. In the early Wild West shows and rodeos, women competed in all events, sometimes against other women, sometimes with the men. Performers such as Fannie Sperry Steele rode the same “rough stock” and took the same risks as the men (and all while wearing a heavy split skirt that was still more encumbering than men’s trousers) and gave show-stopping performances at major rodeos such as the Calgary Stampede and Cheyenne Frontier Days.

Development of the modern cowboy

Over time, the cowboys of the American West developed a personal culture of their own, a blend of frontier and Victorian values that even retained vestiges of chivalry. Such hazardous work in isolated conditions also bred a tradition of self-dependence and individualism, with great value put on personal honesty, exemplified in their songs and poetry.

Today, the Texas and California traditions have merged to some extent, though a few regional differences in equipment and riding style still remain, and some individuals choose to deliberately preserve the more time-consuming but highly skilled techniques of the pure vaquero tradition. The popular “horse whisperer” style of natural horsemanship was originally developed by practitioners who were predominantly from California and the Northwestern states, clearly combining the attitudes and philosophy of the California vaquero with the equipment and outward look of the Texas cowboy.

On the ranch, the cowboy is responsible for feeding the livestock, branding and earmarking cattle, plus tending to animal injuries and other needs. The working cowboy usually is in charge of a small group or “string” of horses and is required to routinely patrol the rangeland in all weather conditions checking for damaged fences, evidence of predation, water problems, and any other issues of concern.

Cowboys also move the livestock to different pasture locations and herd them into corrals or onto trucks for transport. In addition, cowboys may do many other jobs, depending on the size of the “outfit” or ranch, the terrain, and the number of livestock. On a large ranch with many employees, cowboys are able to specialize in tasks solely related to cattle and horses. Cowboys who train horses often specialize in this task only, and some may “break” or train young horses for more than one ranch.

The United States Bureau of Labor Statistics collects no figures for cowboys. Their work is included in the 2003 category, Support activities for animal production, which totaled 9,730 workers with an average salary of $19,340 per year. In addition to cowboys working on ranches, in stockyards, and as staff or competitors at rodeos, the category includes farmhands working with other types of livestock (sheep, goats, hogs, chickens, etc.). Of those 9,730 workers, 3,290 are listed in the subcategory of Spectator sports, which includes rodeos, circuses, and theaters needing livestock handlers.

 

Goldfish


Goldfish is the common name for a freshwater fish, Carassius auratus, of the carp or minnow family, Cyprinidae, that is native to East Asia and has been domesticated and developed into many ornamental breeds for aquariums and water gardens.

One of the earliest fish to be domesticated—in China over 1,000 years ago (BAS 2007)—the goldfish remains one of the most popular aquarium fish. Over the centuries, through human creativity acting on the foundation of an original carp species, many color variations have been produced, some far different from the original “golden” color of the first domesticated fish. Diverse forms have also been developed. Beyond the aesthetic pleasure of such varieties, goldfish have also offered practical value in the control of mosquitoes.

Description

A relatively small member of the Cyprinidae family, the goldfish is a domesticated version of a dark-gray/brown carp native to East Asia.

The Cyprinidae family is the largest family of freshwater fishes in the world, and may be the largest family of vertebrates (with the possible exception of Gobiidae) (Nelson 1994). Common names associated with various members of this family include minnow, carp, chub, and shiner. Nelson (1994) recognizes 210 genera and over 2,000 species in Cyprinidae, with about 1,270 species native in Eurasia, about 475 species in 23 genera in Africa, and about 270 species in 50 genera in North America. Particularly well-known species include the common carp and koi (Cyprinus carpio), goldfish (Carassius auratus), and zebra danio or zebrafish (Brachydanio rerio), the latter used extensively in genetic research (Nelson 1994).

Members of the Cyprinidae are characterized by pharyngeal teeth in one or two rows, with no more than eight teeth per row; usually thin lips; an upper jaw that is usually protrusible; and an upper jaw bordered only by the premaxilla (Nelson 1994).

Goldfish, Carassius auratus, may grow to a maximum length of 23 inches (59 cm) and a maximum weight of 9.9 pounds (4.5 kg), although this is rare; few goldfish reach even half this size. The longest goldfish was measured at 47.4 cm (18.7 in) from snout to tail-fin end on March 24, 2003 in Hapert, The Netherlands (Guinness 2003). In optimal conditions, goldfish may live more than 20 years, but most household goldfish generally live only six to eight years, due to being kept in bowls.

If left in the dark for a period of time, a goldfish will turn lighter in color, because goldfish produce pigment in response to light. Cells called chromatophores produce pigments that reflect light and give the fish its coloration. The color of a goldfish is determined by which pigments are in the cells, how many pigment molecules there are, and whether the pigment is grouped inside the cell or spaced throughout the cytoplasm. Thus a goldfish kept in the dark will appear lighter by morning, and over a long period of time will lose its color.

A group of goldfish is known as a troubling (SDZ 2007).

Life cycle and reproduction

Goldfish, like all cyprinids, lay eggs. They produce adhesive eggs that attach to aquatic vegetation. The eggs hatch within 48 to 72 hours, releasing fry large enough to be described as appearing like “an eyelash with two eyeballs.”

Within a week or so, the fry begin to look more like a goldfish in shape, although it can take as much as a year before they develop a mature goldfish color; until then they are a metallic brown like their wild ancestors. In their first weeks of existence, the fry grow remarkably fast—an adaptation born of the high risk of getting devoured by the adult goldfish (or other fish and insects) in their environment.

Some scientists believe goldfish can only grow to sexual maturity if given enough water and the right nutrition. If kept well, they may breed indoors. Breeding usually happens after a significant change in temperature, often in spring. In aquariums, eggs should then be separated into another tank, as the parents will likely eat any of their young that they happen upon. Dense plants such as Cabomba or Elodea or a spawning mop are used to catch the eggs.

Most goldfish can and will breed if left to themselves, particularly in pond settings. Males chase the females around, bumping and nudging them in order to prompt the females to release their eggs, which the males then fertilize. Due to the extreme body shapes of some modern breeds, certain types of goldfish can no longer breed among themselves. In these cases, a method of artificial breeding called hand stripping is used. This method keeps the breed going, but can be dangerous and harmful to the fish if not done correctly.

Like some other popular aquarium fish, such as guppies, goldfish and other carp are frequently added to stagnant bodies of water in order to reduce mosquito populations in some parts of the world, especially to prevent the spread of West Nile Virus, which is carried by mosquitoes (Alameda). However, the introduction of goldfish has often had negative consequences for local ecosystems (Winter 2005).

Behavior

Behavior can vary widely both because goldfish are housed in a variety of environments, and because their behavior can be conditioned by their owners. A common belief that goldfish have a three-second memory has been proven false (Henderson 2003). Research has demonstrated that goldfish have a memory-span of at least three months and can distinguish between different shapes, colors and sounds (Henderson 2003). They were trained to push a lever to earn a food reward; when the lever was fixed to work only for an hour a day, the fish soon learned to activate it at the correct time (Henderson 2003; Lloyd and Mitchinson 2006).

Scientific studies done on the matter have shown that goldfish have strong associative learning abilities, as well as social learning skills. In addition, their strong visual acuity allows them to distinguish between different humans. It is quite possible that owners will notice the fish react favorably to them (swimming to the front of the glass, swimming rapidly around the tank, and going to the surface mouthing for food) while hiding when other people approach the tank. Over time, goldfish learn to associate their owners and other humans with food, often “begging” for food whenever their owners approach. Auditory responses from a blind goldfish showed that it recognized one particular family member and a friend by voice, or the vibration of sound. This behavior was remarkable because it demonstrated that the fish specifically recognized the voices of two people out of the seven in the house.

Goldfish also display a range of social behaviors. When new fish are introduced to the tank, aggressive social behaviors may sometimes be seen, such as chasing the new fish, or fin nipping. These usually stop within a few days. Fish that have been living together are often seen displaying schooling behavior, as well as displaying the same types of feeding behaviors. Goldfish may display similar behaviors when responding to their reflections in a mirror.

Goldfish that have constant visual contact with humans also seem to stop associating them as a threat. After being kept in a tank for several weeks, it becomes possible to feed a goldfish by hand without it reacting in a frightened manner. Some goldfish have been trained to perform various tricks.

Goldfish have behaviors, both as groups and as individuals, that stem from native carp behavior. They are a generalist species with varied feeding, breeding, and predator-avoidance behaviors that contribute to their success in the environment. As fish, they can be described as “friendly” towards each other: very rarely will a goldfish harm another goldfish, nor do the males harm the females during breeding. The only real threat that goldfish present to each other is in food competition. Commons, comets, and other faster varieties can easily eat all the food during a feeding before fancy varieties can reach it. This can lead to stunted growth or possible starvation of fancier varieties when they are kept in a pond with their single-tailed brethren. As a result, when mixing breeds in an aquarium environment, care should be taken to combine only breeds with similar body types and swim characteristics.

Wild, in native environments

Goldfish natively live in ponds, and other still or slow moving bodies of water in depths up to 20 meters (65 feet). Their native climate is subtropical to tropical and they live in freshwater with a pH of 6.0–8.0, a water hardness of 5.0–19.0 dGH, and a temperature range of 40 to 106 °F (4 to 41 °C), although they will not survive long at the higher temperatures. They are considered ill-suited even to live in a heated tropical fish tank, as they are used to the greater amount of oxygen in unheated tanks, and some believe that the heat burns them. However, goldfish have been observed living for centuries in outdoor ponds in which the temperature often spikes above 86 °F (30 °C). When found in nature, goldfish are actually an olive green, greenish brown, or grayish color.

In the wild, the diet consists of crustaceans, insects, and various plants. They can be quite beneficial through consuming pest species, such as mosquitoes.

Fancy goldfish released into the wild are unlikely to survive for long as they are handicapped by their bright fin colors; however, it is not beyond the bounds of possibility that such a fish, especially the more hardy varieties such as the Shubunkin, could survive long enough to breed with its wild cousins. Common and comet goldfish can survive, and even thrive, in any climate in which a pond for them can be created. Introduction of wild goldfish can cause problems for native species. Within three breeding generations, the vast majority of the goldfish spawn will have reverted to their natural olive color. Since they are carp, goldfish are also capable of breeding with certain other species of carp and creating hybrid species.

Domesticated, in ponds

Goldfish are popular pond fish, since they are small, inexpensive, colorful, and very hardy. In a pond, they may even survive brief periods of ice forming on the surface, as long as there is enough oxygen remaining in the water and the pond does not freeze solid.

Common goldfish, London and Bristol shubunkins, jikin, wakin, comet, and sometimes fantail can be kept in a pond all year round in temperate and subtropical climates. Moor, veiltail, oranda, and lionhead are only safe in the summer.

Small to large ponds are fine for keeping goldfish, although the depth should be at least 80 centimeters (30 inches) to avoid freezing. During winter, goldfish will become sluggish, stop eating, and often stay on the bottom. They will become active again in the spring.

A filter is important to clear waste and keep the pond clean. Plants are essential as they act as part of the filtration system, as well as a food source for the fish.

Compatible fish include rudd, tench, orfe, and koi, although koi require specialized care. Ramshorn snails are helpful in eating any algae that grows in the pond. It is of great importance to introduce fish that will consume excess goldfish eggs in the pond, such as orfe; without some form of population control, goldfish ponds can easily become overstocked. Goldfish may also interbreed with koi, producing sterile hybrids.

In aquariums

Goldfish are usually classified as coldwater fish and can live in unheated aquariums. Like most carp, goldfish produce a large amount of waste, both in their feces and through their gills, releasing harmful chemicals into the water. This waste can build up to toxic levels in a relatively short time, which is often the cause of a fish’s sudden death. It may be the water’s surface area, rather than its volume, that determines how many goldfish a container can support, because surface area governs how much oxygen diffuses and dissolves from the air into the water; a common rule of thumb is one square foot of water surface area for every inch of goldfish length (about 370 cm² per cm). If the water is further aerated by a water pump, filter, or fountain, more goldfish may be kept in the container.
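
The surface-area rule of thumb above can be turned into a rough stocking estimate. This is a minimal sketch, assuming the 370 cm²-per-cm figure from the text; the function name and the idea that aeration doubles capacity are illustrative assumptions, not established guidelines:

```python
# Rule of thumb from the text: roughly 370 cm² of water surface
# per cm of total goldfish body length.
SURFACE_PER_CM = 370  # cm² of surface per cm of fish

def max_fish_length_cm(surface_area_cm2, aerated=False):
    """Estimate the total goldfish body length (cm) a container
    can support, based on its water surface area.

    With extra aeration (pump, filter, fountain) the surface-area
    limit matters less; here we loosely assume capacity doubles.
    """
    capacity = surface_area_cm2 / SURFACE_PER_CM
    return capacity * 2 if aerated else capacity

# A tank with a 60 cm x 30 cm footprint exposes 1800 cm² of surface:
print(round(max_fish_length_cm(60 * 30), 1))  # → 4.9
```

On this estimate, an unaerated tank of that footprint supports only about 5 cm of total fish length, which illustrates why aeration is usually added to indoor goldfish tanks.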

Goldfish may be coldwater fish, but this does not mean they can tolerate rapid changes in temperature. A sudden shift in temperature, such as the overnight drop in an office building where a goldfish is kept in a small tank, could kill them. Temperatures under about 10 °C (50 °F) are dangerous to goldfish. Conversely, temperatures over 25 °C (77 °F) can be extremely damaging to goldfish, which is the main reason tropical tanks are not desirable environments for them.

The popular image of a goldfish in a small fishbowl is an enduring one. Unfortunately, the risk of stunting, deoxygenation, and ammonia/nitrite poisoning in such a small environment means that a bowl is hardly a suitable home for any fish, and some countries have banned the sale of bowls of that type under animal rights legislation.

The goldfish’s reputation for dying quickly is often due to poor care by uninformed buyers looking for a cheap pet. The true lifespan of a well-cared-for goldfish in captivity can extend beyond 10 years.

Goldfish, like all fish that are kept as pets, do not like to be petted. In fact, touching a goldfish can be quite dangerous to its health, as it can cause the protective slime coat to be damaged or removed, which opens the fish’s skin up to infection from bacteria or parasites in the water.

While it is true that goldfish can survive in a fairly wide temperature range, the optimal range for indoor fish is 68 to 75 °F (20 to 24 °C). Like many other fish, pet goldfish will usually eat more food than they need if it is offered, which can lead to fatal intestinal blockage. They are omnivorous and do best with a wide variety of fresh vegetables and fruit supplementing a flake or pellet staple diet.

Sudden changes in water temperature can be fatal to any fish, including the goldfish. When transferring a store-bought goldfish to a pond or a tank, the temperature of the water in the transport container should be equalized by floating it in the destination container for at least 20 minutes before releasing the goldfish. In addition, some temperature differences may simply be too great for even the hardy goldfish to bridge. For example, buying a goldfish in a store, where the water might be 70 °F (approximately 21 °C), and releasing it into a garden pond at 40 °F (4 °C) will probably kill it, even with the slow immersion method just described. A goldfish needs much more time, perhaps days or weeks, to adjust to such a different temperature.
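
The acclimation advice above can be sketched as a small decision helper. The 20-minute float comes from the text; the 10 °F cut-off for when floating alone suffices, and the function name itself, are illustrative assumptions rather than an established rule:

```python
def acclimation_plan(store_temp_f, dest_temp_f, max_float_delta_f=10):
    """Suggest an acclimation approach based on the temperature gap
    between the transport water and the destination water (in °F).

    The 20-minute float is from the guidance above; the 10 °F
    threshold for when floating alone is enough is an assumption
    made for this example.
    """
    delta = abs(store_temp_f - dest_temp_f)
    if delta <= max_float_delta_f:
        return "float the bag for at least 20 minutes"
    return "acclimate gradually over days or weeks"

print(acclimation_plan(70, 68))  # small gap: floating suffices
print(acclimation_plan(70, 40))  # a 30 °F gap is too large to float away
```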

History

Many sources claim that crucian carp (Carassius carassius) is the wild version of the goldfish. Research by Dr. Yoshiichi Matsui, a professor of fish culture at Kinki University in Japan, suggests that there are subtle differences that demonstrate that while the crucian carp is the ancestor of the goldfish, they have sufficiently diverged to be considered separate species (Pearce 2006).

Others hold that the wild form of the goldfish (Carassius auratus auratus) is Carassius auratus gibelio, or rather Carassius gibelio with auratus as the subspecies. The different species can be differentiated by the following characteristics:

  • C. auratus has a more pointed snout while the snout of a crucian carp is well rounded.
  • The wild form of the goldfish C. auratus gibelio or C. gibelio often has a gray/greenish color, while crucian carps are always golden bronze.
  • Juvenile crucian carp (and tench) have a black spot on the base of the tail, which disappears with age. In C. auratus this tail spot is never present.
  • C. auratus have fewer than 31 scales along the lateral line while crucian carp have 33 scales or more.
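
The field marks above amount to a rough decision procedure, which can be sketched as follows. This is only an illustrative toy, not a taxonomic key; the function name, the scoring scheme, and the two-of-three threshold are assumptions made for this example, while the individual criteria come from the list above:

```python
def likely_species(lateral_line_scales, snout_pointed, juvenile_tail_spot):
    """Very rough differentiation between goldfish (C. auratus) and
    crucian carp (C. carassius), using the field marks listed above.
    Majority vote over three criteria; the threshold is arbitrary."""
    goldfish_points = 0
    goldfish_points += lateral_line_scales < 31  # goldfish: fewer than 31 scales
    goldfish_points += snout_pointed             # goldfish: more pointed snout
    goldfish_points += not juvenile_tail_spot    # goldfish: tail spot never present
    return "C. auratus" if goldfish_points >= 2 else "C. carassius"

print(likely_species(29, True, False))   # → C. auratus
print(likely_species(33, False, True))   # → C. carassius
```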

The goldfish was first domesticated in China (BAS 2007). During the Tang Dynasty, it was popular to raise carp in ponds. It is believed that as the result of a dominant genetic mutation, one of these carp displayed gold (actually yellowish orange) rather than silver coloration. People began to breed the gold variety instead of the silver variety, and began to display them in small containers. The fish were not kept in the containers permanently; they lived in a larger body of water, such as a pond, and were moved to the much smaller container only for special occasions at which guests were expected (BAS 2007).

In 1162, the empress of the Song Dynasty ordered the construction of a pond to collect the red and gold variety of those carp. By this time, people outside the royal family were forbidden to keep goldfish of the gold (yellow) variety, yellow being the royal color. This probably is the reason why there are more orange goldfish than yellow goldfish, even though the latter are genetically easier to breed (WetPetz 2004).

The occurrence of other colors was first recorded in 1276. The first occurrence of fancy tailed goldfish was recorded in the Ming dynasty. Around the sixteenth century or beginning of the seventeenth century, goldfish were introduced to Japan (BAS 2007), where the Ryukin and Tosakin varieties were developed.

In 1611, goldfish were introduced to Portugal and from there to other parts of Europe (BAS 2007). Goldfish were first introduced to North America in the mid-to-late 1800s and quickly became popular in the United States (Brunner 2003; BAS 2007).