New Age Movement

New Age Movement, broad-based amalgam of diverse spiritual, social, and political elements with the common aim of transforming individuals and society through spiritual awareness. The New Age is a utopian vision, an era of harmony and progress. Comprising individuals, activist groups, businesses, professional groups, and spiritual leaders and followers, the movement brought feminist, ecological, spiritual, and human-potential concerns into the mainstream in the 1980s, creating a large market in various countries for books, magazines, audio- and videotapes, workshops, retreats, and expositions on the subject, as well as for natural foods, crystals, and meditation and healing aids.

Often seen as resurgent paganism or Gnosticism, the modern movement has more recent roots in 19th-century spiritualism and in the 1960s counter-culture, which rejected materialism in favour of Eastern mysticism and preferred direct spiritual experience to organized religion. Techniques for self-improvement, and the idea that the individual is responsible for and capable of everything from self-healing to creating the world, have found applications in health care and counselling as well as in sports, the armed forces, and corporations, and have provoked debate in religious and other circles.

Holistic thinking has influenced attitudes about medicine, the environment, the family, work, regional planning, and world peace, among others. Ideas frequently associated with the New Age movement include anthroposophical teachings, inner transformation, reincarnation, extraterrestrial life, biofeedback, chanting, alchemy, yoga, transpersonal psychology, shamanism, martial arts, the occult, astrology, psychic healing, extrasensory perception, divination, astral travel, acupuncture, massage, tarot, Zen, mythology, and visualization.




Hairdressing, arranging or otherwise altering the hair for enhanced beauty, for practicality, or to indicate status. The process may involve cutting, plucking, curling, braiding, bleaching, dyeing, powdering, oiling, or adding false hair (such as a wig or fall) or ornaments. Hairstyles have played an important part in the cultural identity of men and women since prehistoric times.


Members of the ancient Mesopotamian and Persian nobility curled, dyed, and plaited their long hair and beards, sometimes adding gold dust or gold and silver ornaments. Both Egyptian men (who were beardless) and women shaved their heads for coolness. On occasion, they wore heavy black wigs and often a cone of perfumed oil on top of the head. The Hebrews were prohibited by biblical law from cutting their hair or beards. Thus, following ancient tradition, Orthodox Jewish men today, as through the centuries, wear their hair and beards long. After the exile in the 1st century AD, Orthodox women, upon marriage, cropped their hair and wore wigs, a custom which to some extent is still practised.

Among the ancient Greeks, boys under the age of 18 generally wore their hair long; except for the Spartans, men were clean-shaven and wore their hair short, curled into small ringlets. Greek women wore their long hair parted in the middle and drawn back into a knot or chignon. Sometimes it was dyed or dusted with colour or twined with ribbons. The curling of hair was so popular in Athens that it gave rise to the first professional hairdressers. In Rome, men were also, generally, beardless and short-haired. Roman women in republican days wore their hair in simple styles; those of the empire adopted elaborately curled and braided coiffures, often filled out with blonde hair taken from German prisoners of war. The Germanic and Celtic peoples of northern Europe sported beards and long hair; short hair was a mark of slavery or of punishment.


In Islamic countries, both men and women continue to follow tradition, covering their hair in public under a headcloth, turban, fez, or veil. Followers of the Sikh religion do not cut their hair, the men wearing it in a tight bun on the top of the head, under a turban. Indian women traditionally wore their hair in long plaits. In China and Japan men formerly shaved the front of the head and tied the back hair in a pigtail. Chinese women combed their hair back into a low knot, and Japanese women—before the 17th century—wore their long hair unbound. Subsequently, they wore the hair drawn up off the neck and elaborately arranged, pomaded, and ornamented with ribbons, hairpins, or other objects. Warriors of some Native North American peoples traditionally shaved their heads, leaving a centre tuft of hair. Intricate patterns of braiding and beading decorate the hair of sub-Saharan African women, and in the early 1980s, an adaptation of this style became fashionable with black women in many parts of the world.


In Europe, in about the 8th century, the tonsure, a clean-shaven circular patch on the crown of the head, was adopted by members of Christian monastic orders to indicate their dedication to the service of God. Roman Catholic priests continued to wear the tonsure until 1972. By the 9th century, European noblemen wore their hair cropped at the neck, and women’s hair was long and generally plaited; married noblewomen, following the Church’s stipulations about modesty, covered these plaits with a veil. In the later Middle Ages and during the Renaissance, men’s hair was generally worn short and rolled under at the neck or above the ears. Fashionable women of the 13th and 14th centuries coiled their plaits over their ears or bundled them up at the back of the head, in both cases covering them with gold net cauls or with linen drapery, surmounted by a veil. In the courts of 15th-century France and the Netherlands, women plucked their foreheads to give an effect of added height and combed the rest of their hair under huge wimples draped with veiling. Italian women of the time set off their plaited, curled, and coiled hair with neat jewelled bands or caps. In Elizabethan England, noblewomen frizzed and powdered their front hair over hoops or pads and netted up the back hair.

In the early 17th century fashionable European men wore long flowing locks, often curled, piled, and perfumed. Trim moustaches and short, pointed Vandyke beards (so called from the style shown in portraits by the Flemish painter Sir Anthony van Dyck) were in vogue. Fashionable women wore a fringe across the forehead and puffs of hair or long curls at the sides, often incorporating false hair and threaded with ribbons and pearls. The back hair was coiled up on the head. In the late 17th century, men began to wear large curled wigs over close-cropped hair—a fashion introduced by Louis XIII of France to hide his baldness, and continued by Louis XIV, who wore a towering wig to make him appear taller. Towards the end of the century, these men’s wigs were matched by women’s headdresses, consisting of great superstructures of wire, frills, lace, and ribbons.


In the 18th century, men’s wigs, now smaller, were customarily whitened with powder and tied behind with a black ribbon. Women initially wore their hair very short, and powdered, curled, or waved. By the 1770s they had adopted a style of combing their hair up into lofty constructions supported by wire, pads, and false hair; it was powdered and decorated with such ornaments as flowers, ribbons, plumes, jewels, hats, or even miniature replicas of objects such as coaches, windmills, or warships.

With the French Revolution, hairstyles became simpler. Thereafter, men have generally worn their hair short, with recurrent periods when beards become fashionable. Women’s styles moved from the simplicity of the Empire style—heads encircled by a fillet in the ancient Greek mode—to Victorian complexities of curls, ringlets, fringes, and chignons.


After World War I, it became fashionable for women to wear their hair bobbed, and often permanently waved. Since then, increasing numbers of women regularly go to professional hairdressers for cutting, styling, curling, and dyeing according to the latest styles. For men, the closely cropped crew cuts adopted for practical reasons during World War II gave way to longer hair and untrimmed beards. Shorter, shaped hairstyles and neatly trimmed beards reappeared in the 1970s; by the 1980s permanent waving was not uncommon among men, and moustaches were more in evidence than beards.



News and Current Affairs


News and Current Affairs, reporting and analysis of events by radio and television programmes, and on the Internet. The two terms, “news” and “current affairs”, reflect old differences in the way that broadcasting used to treat topical matters, differences that barely survive in today’s advanced radio and television systems. In early radio, before television, the news was plain, restricted to what newspeople call “hard fact”. Newsreaders gave carefully scripted accounts of main undisputed facts of politics, wars, accidents, and other significant events. Facts were not interpreted or analysed.


In traditionalist Europe, that narrow concept of news satisfied the desires of governments to control the new radio medium in the public interest. They believed that without controls broadcasting could do as much harm as good. The United States, however, was different because broadcasting development was driven more by commercial considerations and by a stronger belief in the pre-eminence of freedom of expression. News broadcasting there soon developed a freer style than in Europe and its colonies.

Regardless of the degree of control, the inadequacy of news limited to plain fact became evident. A news bulletin told people the news but did not help them understand it. It did not adequately make them aware of the issues, of the “news behind the news”. To compensate, the concept of current affairs was invented. Though close to the news in the subject area, it was separate from it. News continued to be strictly factual, while current affairs delivered a mix of fact, comment, opinion, analysis, and interpretation in interviews, commentaries by experts, and feature reports. The change advanced more in European “public service” broadcasting than in American commercial broadcasting.

An important factor in the free world that is still strong today was the belief that news broadcasting should be impartial. It should not take sides in matters of public dispute. It should, for instance, report industrial strikes without favouring the employers or strikers. Similarly, political reporting should not side with any party. Impartiality was encouraged by the dependence of broadcasting on a public resource—the frequencies, the radio waves that carry signals from transmitters to radio sets. Frequencies are allocated to prevent a jumble of programmes from different stations on the same frequencies in the same areas at the same time. It was reasoned that as broadcasting used a public resource it should serve all of the public. To do so, it had to be impartial.

Such reasoning does not apply to newspapers, which do not depend on any public resource in the same way. Publication of one newspaper does not obstruct or prevent publication of another, although competition for readers might cause one to fail, in the same way as competition for listeners among radio stations. Thus, newspapers continued to be free to take sides while regulated broadcasting was not.

In the United States, the tradition of independent journalism encouraged its lightly regulated radio stations to adopt standards of impartial reliability as effectively as heavier regulation achieved that end in European liberal democracies. Authoritarian regimes in Europe and elsewhere strictly controlled all the news media and used them for propaganda. Some other countries had a degree of newspaper freedom, while broadcasting was made to serve “the state”, which usually meant the purposes of government.

Over the years, broadcasters in freethinking countries developed more sophisticated ideas of news and current affairs. The two approaches moved closer together, overlapped, and finally intermingled. The new ways of news broadcasting aimed to make the news comprehensive and comprehensible. Broadcasters came to believe that a news programme should give the news, the meaning of the news, and relevant comment on the news in whatever ways programme-makers decided were best. A radio or television news programme might start with a bulletin of hard news reports on various events, in summary, or at length. The same programme could then move to a sequence of interviews with people in the news and to reports that were more discursive than in the bulletin. A differently constructed news programme might tell the facts of one event, explain them in another report immediately following, perhaps by a specialist correspondent, and include leading comment from people involved in the event, before dealing in similar ways with the next most important or most interesting story. Many variations are possible. The length of a programme and the nature of its parts depend on several factors: on the time of day, shorter news items being more convenient for audiences at busier times; on audience profile in terms of age, sex, and socio-economic group; on programme policy, a talk station favouring longer news than a station mainly for music; and on whatever news is available.


Interviews became important. Broadly, they have two aims: to elicit facts and to seek comments—functions that often merge. Interviews for facts are prominent when newsworthy events have just occurred. Viewers and listeners hear police officers, for example, giving facts about newly committed crimes, or rescuers describing what has happened in accidents and disasters. Interviews for comments involve experts, public figures, and other people in the news. Their purpose is sometimes to explain the significance of events. With public figures fixing public policy, the purpose is to press them to justify their decisions. In early broadcasting, such interviews were usually deferential. Interviewers showed well-mannered respect for people in public office. Now, they are as likely to interrogate interviewees. This has caused politicians in democracies to complain that television and radio have supplanted parliament as the forum of national debate: “trial by media”, they say. In turn, broadcasters argue that experience and concern for public image make politicians evasive. In the United States, the sound bite—a cogent, very short comment, used repeatedly in news programmes—is held to have ousted thoughtful exposition, although public figures do explain themselves at length on prime time talk shows. Interviewing style also varies between nations, reflecting prevailing cultural norms.


Technology assisted the transition from rigidly separated news and current affairs broadcasting to modern news programming that has abundant material. Difficult-to-use wax discs for recording interviews and reporters’ dispatches gave way on the radio to manageable magnetic tape. On television, cheap, easily edited videotape replaced expensive film that had to be developed before viewers could see it. Improved telephones and landline circuits from distant studios to the news transmission studio encouraged programmes to use their own reporters instead of standard news agency copy. Cumbersome, costly outside broadcast vehicles—mobile studios—sent to the scene of only the biggest stories were superseded by smaller news broadcast vehicles, saloon cars with radio transmission equipment. These can travel more readily, giving radio reporters more opportunities to beam their news directly into the news studio and, if necessary, live into homes and offices. Electronic news gathering (ENG) in television allowed its reporters to do the same with pictures and sound. Communications satellites also improved the quality of pictures and sound from distant places. More news was reported more quickly.

Portable telephones, lightweight video cameras, and portable satellite transponders (devices that both receive and send out signals) have further increased quantity and speed. Reporters send pictures and their account of the facts via satellite directly to studios in London, Washington, Paris, Sydney, and all points on the globe. Reporting the news from any location can now be instant.

As a result, editors of news programmes have many more stories to choose from and much more material to illustrate them. Editors first decide which events they would like covered so that reporters with cameras and sound equipment are allocated to them. Editors also receive material on events they did not know were happening or were going to happen. For their programmes, they decide what to use, in what form, how they are to be edited, to what length, in what order, and whether the reports should be live or recorded. They also decide which stories are most important or most interesting, and how their locality, their country, their region, and the world will be presented.

With more news to use, radio and television carry many more news programmes than in the days when news travelled slowly. Some stations have news all the time, 24 hours a day. The explosion of news will continue. Events in many parts of the world are under-reported or not reported at all, sometimes because they are too remote, sometimes because of restrictive governments, eager to hide problems, suppress information, and deter reporters. However, political change, the demand for news, and easy technology combine to break down barriers and to encourage programme producers to explore more and more events in more and more parts of the world.

Some critics say that television often uses pictures simply because they exist or because they are exciting, not because they are important. They argue that editors neglect more important events for which there are no pictures or where the pictures lack action. Others see the situation in a different light: the growth of news means that the world is better informed and, while many events reported are relatively trivial, there are many serious news programmes attending to many significant events.


The expansion of news and current affairs journalism has continued with the emergence of the Internet. Since the early 1990s, when the Internet began to become a mass medium, it has developed into a steadily more important platform for journalism. The Internet is the world’s first truly global news medium, in that online journalism is accessible to anyone, anywhere on the planet, with a personal computer and an Internet connection.

In 1997 there were only 700 online news sites in the world. As of 2007, there were millions, and nearly every news organization had a website. In addition to sites operated by established news organizations, such as the BBC and CNN, there are millions more run by individual journalists, and by what are now called “bloggers”. Blogs are regularly updated online bulletins, often containing news and comment on the issues of the moment. Many are amateurish and ephemeral, read by only a handful of like-minded bloggers. Others, such as that written by Salam Pax, the “Baghdad Blogger”, during the invasion of Iraq in 2003, become essential reading all over the world, supplying the traditional news media with stories and analyses.

Blogging is part of a broader trend towards “citizen journalism”, in which individuals armed with video cameras and mobile phones generate material for traditional news and current affairs outlets. Coverage of the Asian tsunami of Boxing Day 2004, for example, featured many video clips taken by people on the spot, then uploaded to the editorial offices of the BBC and others for incorporation into news bulletins.

The rise of citizen journalism, also known as user-generated content, has benefited traditional news and current affairs broadcasting by making available more of the raw material of news. In general, therefore, the trend has been welcomed by the news media. However, concerns have been raised about the quality controls on this kind of material. How can the accuracy and objectivity of user-generated content be guaranteed, in the absence of professional skills and editorial safeguards? This is an issue that traditional news and current affairs media are now grappling with, in an effort to harness the potential of new technologies like the Internet, while preserving the perceived reliability of their programmes.

The Internet has also fuelled what some observers call a “commentary explosion”, in which more and more of the content of journalism is not factual reportage or balanced analysis and commentary on the news, but rumour, gossip, polemic, and bias. Again, news media face the issue of trying to filter out worthwhile commentary and analysis for inclusion in their news and current affairs programmes. The greatest challenge facing news and current affairs journalism today is not the quantity of material available, nor the number of platforms from which journalism can be distributed, but ensuring the quality of what is produced.

Additional material by Brian McNair, Professor of Journalism and Communication, University of Strathclyde. Author, Cultural Chaos: Journalism, News, and Power in a Globalised World.

Contributed By:
John Wilson


Men’s Clothing

Men wore breeches and hose (short trousers and stockings). The relative length of the one compared to the other tended to blur the distinction between the two, the hose having become so long in the High Gothic period as almost to eliminate the breeches. Until the advent of knitted material, almost unknown in the Middle Ages, hose were made of linen or woollen cloth cut to shape for a relatively tight fit. They could hardly have presented the smooth appearance shown in pictures of the period, which was achieved only later with knitted fabrics. In the 1100s, hose reached mid-thigh and were made to cover short breeches or drawers. Earlier, breeches worn by the wealthy were cut narrower and those of labourers fuller, and both were usually cross-gartered below the knee.

Clothing in the early 1100s was worn long, and the overtunic was replaced by the bliaut, a garment imported from the Orient. Everything, including the sleeves, was long, full, and trailing. Men’s clothing in the remainder of the 1100s and during the 1200s displayed variations of length, fullness, and decoration and different names for what were essentially the same garments. A notable change was that the hood became a separate garment.

Later in the period, the hood—with its pointed end, the liripipe, and short shoulder cape—became a hat; the hole originally intended for the face was pulled over the head, and the extended liripipe was wrapped around the head in turban fashion. Later still, the hat was hung over the shoulder by the liripipe and worn as a badge; its ultimate manifestation became the cockade on the livery hat of the 1800s or the doorman’s hat of the 1900s. Another even more curious derivation of the hood is the small tab sewn in the back of an English barrister’s gown, an appendage dating from the time when a client would drop money in the hat if a case was thought to be going well.

In the 1300s the tunic was narrowed and shortened to a more tailored look and evolved into what came to be called the doublet. Over the doublet the old overtunic, now with a collar and called a cotehardie, was still worn. The houppelande, an outer garment with a long, full body and wide, flaring sleeves, was worn until the end of the century and survived into the 1400s and 1500s in the dress of the professional classes and older men. It survives in the academic and legal gowns and robes of today.

The doublet developed into a fully tailored, frequently padded garment, which in varying forms survived as the basic male outer garment through the middle of the 1600s. Its modern derivation is the waistcoat or vest worn with a suit.



Renaissance Clothing

Clothing typical of the Renaissance evolved in Italy and was brought to the rest of Europe following the invasion of Italy in 1494 by Charles VIII of France. It is unclear why the rather simpler styles of Italy evolved independently from those of the rest of Europe, but it seems likely to have been a result of the warmer Italian climate. The low-necked tunic and chemise for men and the similarly simple and low-necked gowns of the women, called Juliet gowns, had a very rapid if short-lived effect on the evolution of European clothing in general. By the 1520s, simplicity had vanished, but the vertical look of medieval garments had completely given way to the horizontal effects of Renaissance dress. Concurrently with this rapid change in style, the craze for slashing burst upon Europe. Probably originating in southern Germany, and surviving well into the 1600s, this fashion involved cutting slits in the outer fabric and pulling the lining fabric through the hole to create a decorative contrast.

Perhaps the most interesting development of this period was the use, or at least exposure, of clean linen chemises, by both men and women. Once exposed, the chemise was, of course, decorated; the lace edges and frills at the neck and sleeves developed in less than 50 years into the starched and elaborate ruffs worn for another 100 years. Starched or soft, these collars developed into the fall, or jabot, and eventually became the cravat and then, finally, the necktie.

The only basic change in men’s clothing during the Renaissance, other than in decorative emphasis, was the lengthening of breeches, which were, as always, elaborately decorated once exposed. Women, on the other hand, endured increasingly restrictive garments. Early in the Renaissance there appeared a long, rigid, almost cone-shaped corset reaching well below the waist to a V in front and making little or no concession to the woman’s natural form. Corsets had been used before to emphasize but never to distort the feminine form. The breasts were forced upwards above the corset and remained there until fashions changed with the French Revolution in 1789. Styles varied enormously from that time, but the basic emphasis on distortion did not. Some of the rigidity was relieved when whalebones replaced metal stays in clothing, but in general, discomfort was increased because of the widespread practice of artificially shaping the skirts with underpinnings varying from bags of bran to elaborate metal cages.

While basic garments remained much the same as they had been in the Middle Ages, a relatively natural look was replaced with elaborate shapes, lacing, padding, and rigidity. This is ascribed to the extreme formality of the tradition-bound Habsburg courts of the Holy Roman Empire, especially those in Austria and Spain. The rare attempts to overthrow this rigidity in European fashion were not followed in the Spanish court, as evidenced by the huge panniered skirts shown in the royal portraits by the Baroque painter Diego Velázquez.



Women’s Clothing

Women also adopted the bliaut, as well as another Oriental garment with long wide sleeves, the Oriental surcoat. The bliaut, made of fine material crimped or pleated, was long, full, and trailing like the garments of men. A new development of the period was an early form of the corset that emphasized the female figure. Throughout the Middle Ages, a woman’s ankles were never exposed to view. Indeed, through most of the period, skirts fell long on the floor in front, possibly to insulate the wearer against cold and draughts while she sat in the chilly home at a time before the invention of central heating. Skirts were carried in front of the body when walking. This led to a feminine posture characteristic of the Middle Ages—a rather stately leaning-back carriage of the body, emphasized in the later centuries by fantastic, tall headwear and trailing veils worn over long trains.

Until the 1400s, women’s garments were less extravagantly shaped than men’s, the clothing being tight-fitting and full-skirted with tight sleeves. Over the gown, a cotehardie and then the sideless gown was worn. Early in the period, the hair was veiled in a wimple, a cloth draped over the head and around the neck up to the chin. In cold weather and for state occasions a very full, three-quarter-round or even full circular cloak was worn. With the abandonment of the simple wimple, an even more fantastic and elaborate style of headgear developed. At first, width was emphasized, followed by an emphasis on height, with results that were subsequently equalled only by the high wigs and deliberately representational head adornment of the late 1700s.

In the 1300s, women’s clothing, like men’s, became tighter-fitting and more tailored and, in the 1400s, more elaborately fitted and padded. New and elaborate methods of weaving were also developed in the 1400s, and with them a whole range of new fabrics.



Film Studios

Film Studios, buildings that house film production units, or the permanent companies that produce films. In the former sense, film studios date back to the beginning of cinema. The first film studio was built by the Edison Company in New Jersey, United States, in 1893, but the standard pattern for the next couple of decades was set by the studio Georges Méliès had built at the end of the 19th century for his film-making. This was derived from the standard design already used for professional stills photography. It resembled a gigantic greenhouse, with three walls and the roof made of sheets of glass. The fourth wall was solid, and the glass used for the walls and ceiling had a ripple, or prismatic, finish to diffuse the light. Muslin blinds could be rolled across under the ceiling and also over the transparent walls to further soften the sunlight on very bright days. Nearly all film production companies used similar studios to film interior scenes on constructed sets within them. After a few years, the diffused sunlight through the glass roof came to be supplemented with extra artificial light from arc lights and racks of mercury-vapour tube lights. This was done to enable film production to continue on very overcast days, and also to give better three-dimensional modelling to the figures of the actors.

During World War I the big American film companies began to black out their studios and to film completely under artificial light, as this gave greater control and uniformity in the lighting. Studios were also fitted with hanging galleries over the area where the sets were built, to give a convenient place for some of the extra lights now in use. The rest of the world followed these practices after a few years. With the coming of sound films, most studios were soundproofed to exclude external noise.

Within a film company, the shooting areas under cover, and the buildings housing them, are also referred to as “stages”. The open areas within a film company’s grounds that are used for film production are called the “backlot”.

In the latter sense of “film studios”, the word “studio” is also used by transference to mean a permanent company that arranges a continuous production programme of films solely using its own production facilities, and also distributes these films exclusively to film exhibitors. From the 1920s through to the 1940s, this description was applied to the seven large American film companies—Metro-Goldwyn-Mayer (MGM), Warner Bros., Paramount, 20th Century-Fox, Universal, RKO, and Columbia—that had large numbers of actors, directors, and technicians under long-term contract making films in their own studios, and distributing them to chains of cinemas they owned in the United States and elsewhere. This “vertically integrated” system was broken down by American government action at the end of the 1940s, when companies owning chains of cinemas were separated from producing and distributing companies by law. However, Universal, Columbia, 20th Century-Fox, Disney, and Warner Bros. still distribute films, own studios, and produce some of the films made in these studios.

Similar studio complexes were established in other countries during the first half of the 20th century. One of the largest outside the United States was the UFA studios at Babelsberg near Berlin. In Britain numerous studios were set up in and around London, among them Pinewood, which became the headquarters of the Rank Organisation, the ambitious construction by Alexander Korda at Denham, and the small but highly influential Ealing Studios. Notable French studios included those at Vincennes and Billancourt, both on the outskirts of Paris, and the Victorine studios in Nice, while in Italy the huge Cinecittà complex was constructed near Rome under the auspices of Benito Mussolini. In Czechoslovakia the Barrandov studios in Prague were reckoned among the most modern and best equipped in Europe. The Soviet cinematic authority, Goskino (later Sovkino), controlled major production centres near Moscow and Kiev (now in Ukraine). In Japan film production concentrated around the cities of Tokyo and Kyoto, and the main studios were those of Nikkatsu (later Daiei), Toho, and Shochiku. The other chief production centre in Asia was Bombay (now Mumbai), home of Hindi-language cinema, where the countless, and often short-lived, studios were lumped together under the flippant collective name of “Bollywood”.

As in the United States, in most countries the centralized power and organization of the studios steadily diminished during the latter half of the 20th century. By the end of the century continuous production by a single company with full-time staff in a single studio, such as had been the norm in the Hollywood “studio era” of 1920-1950, had become the exception. In Britain, France, and almost everywhere outside the United States the studio complexes had devolved into production facilities, offering fully equipped, four-walled stages to be rented out to whichever production company needed them on an ad hoc basis. This trend began in the 1950s and 1960s, when small production companies started taking advantage of the independence and freedom from overheads gained by not running a permanent studio. (To some extent this had always happened—alone among the Hollywood majors, United Artists had never owned its own studio, but had rented space from the other majors when needed.)

In recent years, alongside the burgeoning power of stars and agents (often working in tandem), much of the most interesting American film work has come from smaller outfits such as Orion, Lion’s Gate, Miramax, and DreamWorks SKG, generally producing independently but relying on the facilities and distribution muscle of the major studios. Some of these companies, such as Orion, have struggled to survive; others have sought safety in a semi-autonomous alliance with one of the big players, the most prominent example being Miramax in its often troubled relationship with Disney. In some cases, independent film-makers have aspired to establish their own production facilities, mini-studios in their own right, occasionally with success, as with George Lucas and his Lucasfilm complex in northern California, but more often, as with Francis Ford Coppola and American Zoetrope, ending in ignominious collapse. Similarly ambitious European ventures, such as the British Goldcrest or the Anglo-French-Dutch combine PolyGram, have also proved disappointing. For the moment, the survivors of the old studio system remain the leading players in the game.

Reviewed By:
Philip Kemp




Modelling, the business of displaying clothing, jewellery, cosmetics, accessories, sundry products of the fashion industry, or other items, as a live mannequin, whether in person or in photographs. Models may also sometimes be employed to be photographed in their own right, for artistic or other purposes.

Among the roles for fashion models are appearances in runway (catwalk) shows and showroom presentations, at which a designer’s collection is presented to the world’s media and fashion buyers. Other important areas of work include photographic editorial work for magazines and newspapers and exhibition-stand modelling at trade shows. By far the most lucrative (and sought-after) work in modelling is being photographed for designer catalogues and major advertising campaigns, whether for fashion, cosmetics, or perfumery, or even an unrelated area.


All models are booked through an agency, usually one serving the client’s market. The agency typically takes a booking fee plus a percentage of the model’s fee from the client and is responsible for collecting the fees on the model’s behalf. Because modelling, like fashion, is a global industry, it is usual for a model to be represented by a different agency in each major market. The agency with the first claim on a model is referred to as the “mother agency”, and is usually paid a commission if the model is booked by the other agencies. It is essential that anyone seeking a career as a model begins by finding a suitable agency; that is, one that serves the kind of market in which he or she wishes to work.

Although tastes and looks change, the basic criteria for potential models at present are lean physical proportions and, generally, above-average height. Models also generally start their careers relatively early: exceptions are very rare. The less readily defined and yet equally important key to success is a “look”: something that makes a model distinctive and appealing to clients. Many fashion models are not classically beautiful or handsome, and yet have qualities that place them in great demand.


The first stage in a model’s career after acceptance by an agency is the assembly of a portfolio of photographs. Primarily this will be assembled by “testing”, whereby photographers try out a new model. These pictures are used in the first instance to seek editorial work, which, although not well paid, offers the model the opportunity to experience working on shoots with top photographers and acts as the all-important showcase through which more financially rewarding bookings are gained. It is exceptionally rare for any model to appear in major shows, catalogues, or campaigns without a strong editorial portfolio.

Competition within the modelling business is intense, as the supply of models usually exceeds the demand, and clients have a glut of models from which to choose. Models are expected to attend shoots and appearances in any part of the world. Many spend much of the year travelling between assignments.

The fashion industry and modelling have been intermittently criticized for their depiction of a feminine “ideal”; there has also been concern that the influence of the image of some models, especially those who are very thin, may encourage anorexia nervosa in adolescent girls. Fashion modelling is, however, a business in which women invariably command much higher earnings than their male counterparts.

Until the 1960s fashion models were mostly anonymous. With the growth in popularity of the global fashion markets that accompanied the rise in importance of ready-to-wear clothing and licensing, some models became more generally well known. In subsequent decades this phenomenon has grown with the popular appeal of fashion, so that there has now emerged a group of “supermodels” at the top of the profession who are global celebrities and who command considerable fees. However, there are also successful models who simply take up a modelling career between school and university, or to fund their education, or who are successful for a period without any need or wish to continue modelling as a full-time career.

Contributed By:
Mark Curtis



Indian Cinema


Indian Cinema, the historical development of cinema in India. Cinema is popularly acknowledged to have been introduced to India through the famed private screenings by the Lumière Brothers at the Watson’s Hotel, Bombay, on July 7, 1896. As elsewhere, however, a variety of “pre-cinema” technologies and art forms involving the projected or narrated image have retrospectively been identified as important precedents: among them the “pat” painting and mural tradition present in various forms all over the country, in which an interlocutor/storyteller usually illuminates images while interpreting them in music and words (for example, the Rajasthani Pabuji-no-pad, the Maharashtrian Chitrakathi, or the animated leather puppets of Andhra Pradesh). More directly related to film technology itself, the Patwardhan Brothers’ Shambarik Kharolika (Magic Lantern) from the late 19th century in Maharashtra is one of the best-known instances of partially animated glass plates projected onto a screen.


The film industry in India began in two very contrasting sectors. On the one hand, distribution agencies (such as Bourne & Shepherd, Calcutta, Clifton & Co., Bombay, or the Madras Photographic Stores) often extended their business in still photography both to distribute “animated photographs” and to offer crews on hire to shoot tea parties, advertising films, and prominent stage plays. This genre was augmented by films sponsored by the colonial state and the indigenous royalty, with official cameramen accompanying royal entourages (a prominent event of this nature being the extensively filmed Grand Durbar of King George V in Delhi, 1911). On the other hand, the first Indian film-makers were mainly independent amateurs: Harishchandra Sakharam Bhatvadekar, who made several shorts (for example, Wrangler Mr R. P. Paranjpye’s Return to India, 1902), was one of the best known. He also extended his film-making into an independent tent-show distribution business and thence into what was, in the first decade of the 20th century, one of India’s better-known businesses trading in camera equipment. Hiralal Sen started the Royal Bioscope Company (1899) on the edge of Calcutta’s thriving commercial stage, which was even then adapting to film with Amritlal Bose’s innovative programming at the Star Theatre.

The first film-maker in something like the contemporary sense was the man known as the “father of Indian Cinema”, Dhundiraj Govind (aka Dadasaheb) Phalke, who made his debut with Raja Harishchandra (1913; King Harishchandra), made in a domestic, cottage-industry studio. In 1918 he started the Hindustan Cinema Films Company at Nasik, with finance from Mayashankar Bhatt, representing the first real instance of indigenous capital entering film production. Much of the early money for production, during the period 1915-1922, came from real-estate speculation expanding into theatre-building and thence into distribution and production, notably in Punjab and western India.

In both Bombay and Calcutta, several studios arose in the 1920s, financed by former exhibitors and theatre owners. Kohinoor Film Company (established in 1918) was the largest, financed and owned by Dwarkadas Sampat, who had financed other similar enterprises, including one for S. N. Patankar, another important independent distributor. Kohinoor began with a major censorship controversy when its first significant production, the mythological Bhakta Vidur (1921; St Vidur), in which the saint appears clad as Gandhi, was banned for political reasons. Kohinoor nevertheless recovered to make several mega-hits in the 1920s, including Gul-e-Bakavali (The Fairy and the Flower) and Kala Naag (Black Snake), both in 1924, and was followed by a number of successful studios in Bombay (Ranjit, Sharda, and Imperial—the last known for making India’s first sound film, Alam Ara, in 1931).


With the coming of sound, most of the existing studios either closed down or grew into larger units. Calcutta saw the coming together of talent from the silent studios—Indian Kinema, Barua Pics (especially its proprietor, one of India’s most celebrated directors, P. C. Barua), and British Dominion Films—into perhaps the foremost studio in India’s history: New Theatres (established in 1931). This studio, together with Prabhat Film Company in Pune (which had developed in 1929 from the film-making traditions of Kolhapur, pioneered by Baburao Painter’s Maratha historicals and Mahabharata mythologicals at the Maharashtra Film Company), Bombay Talkies (established in 1934), the Sagar Film Company (formed in 1930, an offshoot of Imperial Studio), and Wadia Movietone (established in 1933), made up the leading lights of what has been defined as the “studio era”, the period after the coming of sound and before the start of World War II. The first southern Indian films in Tamil and Telugu were made either in Bombay or in Calcutta, but by the 1940s a flourishing studio infrastructure was established in Madras, Coimbatore, Salem, and Mysore-Bangalore.

This period of the “studio era” saw some of the biggest, most spectacular, and complex melodramas ever made in India. Many of their terms of reference were social reform literature, sagas of the nationalist movement, and “realistic” proscenium stage plays. Some of the leading figures in Indian cinema—V. Shantaram (Kunku, 1937; The Unexpected), Barua (Devdas, 1935), Debaki Bose (Vidyapati, 1937), B. N. Reddi (Swargaseema, 1945), and B. R. Panthulu (School Master, 1958)—were responsible for establishing during this period not just the “look” of Indian cinema as it is currently known, but the very terms of cultural modernity, ideologies of neo-traditionalism and of the urban middle class.


With the end of World War II (and thus, of the war economy: the lifting of rationing on the raw stock, and the entry of new financiers into the film industry) and the coming of Independence (1947), the Indian government introduced several nationalist state policies on the cinema. Underpinning many of these was a new post-Independence interpretation of national cinema, ascribing to the mainstream Indian cinema a specific cultural value of fostering an indigenous “national integration”. Indeed, large numbers of the population outside the Hindi belt found contact with the “national” language through the cinema, and the Hindi cinema norms themselves were duplicated in Bengali, Tamil, Telugu, and Malayalam language industries. However, the birth of an independent cinema movement in the 1950s—the best-known example being the Apu trilogy by Satyajit Ray: Pather Panchali (1955; Song of the Little Road); Aparajito (1956; The Unvanquished); Apur Sansar (1959; The World of Apu)—also led to several changes in the norm within the mainstream industry itself. Driven by an explicitly post-national disaffiliation from the sentiments of belonging and community, much of this cinema introduced the notion of heroism in the character of the outsider, the rebel, the one who renounces society, as seen, for example, in the work of Raj Kapoor (Shri 420, 1955; Mr 420), Guru Dutt (Pyaasa, 1957; Eternal Thirst), and Mehboob Khan (Mother India, 1957).

Sequences such as the corrupt land scheme in Shri 420, scripted by the radical playwright K. A. Abbas, the poet who is believed dead and posthumously praised in Pyaasa, or, most spectacularly, the mother who fertilizes the soil with her rebel son’s blood in Mother India are now legendary in India. Many such scenes have to be perceived in terms of the contradictions posed by “traditional” values and a booming industry of mass culture, the rise of independent speculators and financiers, and their most obvious commodity: the star system. Stars of the Hindi cinema such as Dilip Kumar, Nargis, and Dev Anand were part of a large group covering many regional languages: Uttam Kumar and Suchitra Sen in Bengali, Rajkumar and Kalpana in Kannada, Prem Nazir, Sathyan, and Madhu in Malayalam, and (future politicians and chief ministers) M. G. Ramachandran and N. T. Rama Rao in Tamil and Telugu respectively. These stars were only the most visible elements in a major technological and narrative standardization of the “All-India Film” idiom, premised on the songs and the audio industry, composers, and lyric writers in every language, and a system of visual and dialogue recording that have come to be characteristic of the “Indian” or “Masala” or “Bollywood” cinema.

In the late 1960s, with a series of agrarian, industrial, and other movements threatening the stability of the government of Indira Gandhi, came the “New Indian Cinema” movement. It was set in motion with a small, unsecured loan from the Film Finance Corporation (now the National Film Development Corporation) to Mrinal Sen, for making Bhuvan Shome (1969), his first film in Hindi. A contemporary of Ray, and Ritwik Ghatak, Sen was already an established director of serious cinema, but Bhuvan Shome, reaching a much larger audience, brought him national recognition.

The initiative for funding independent art films came from the government, and the movement, in turn, drew on a substantial ancestry in the films of Ray, Mrinal Sen, and Ritwik Ghatak (Meghe Dhaka Tara, 1960; The Cloud-Capped Star), a maverick director whose genius went unrecognized in his lifetime. Ghatak’s legacy is evident in many of the films that pioneered the New Cinema movement: Mani Kaul’s Uski Roti (1969; Daily Bread), a shocking departure from the narrative and the familiar; Kumar Shahani’s radical Maya Darpan (1972; Mirror of Illusion), the Indian cinema’s first consistently formalist experiment; and Ketan Mehta’s folk-theatre-derived tale of social injustice, Bhavni Bhavai (1980; A Folk Tale). The films of this period were often politically strident, their locale mostly rural or small-town India, as in the works of Shyam Benegal (Ankur, 1973; The Seedling), Girish Karnad (Kaadu, 1973; The Forest), Adoor Gopalakrishnan (Kodiyettam, 1977; The Ascent) and G. Aravindan (Uttarayanam, 1974; The Throne of Capricorn).


In 1975, Indian cinema saw its biggest ever hit, Sholay (Flames), the Ramesh Sippy film that consolidated an era named after the biggest star of the 1970s and 1980s, Amitabh Bachchan. Themes of kinship violence against state formations revitalized the melodrama into a new formula often addressing, or spinning off from, growing perceptions of state authoritarianism.

Bachchan’s best-known films, made with Manmohan Desai (Amar Akbar Anthony, 1977), Yash Chopra (Deewar, 1975; Wall), and Prakash Mehra (Zanjeer, 1973; Chains), introduced a new, specifically 1980s idiom, often summarized as the “vendetta movie”, in which the hero, or the heroine, becomes the dispenser of popular justice in flagrant disregard of the law. Some of the themes, and their derivatives in other languages—for example, Chiranjeevi’s Goonda (small-time gangster) films in Telugu, Mammootty’s police thrillers in Malayalam, or the Rajnikant hit Annamalai (1992) in Tamil—should be seen in the context of a turbulent society, torn between traditional family values and an individualist counter-culture that makes a virtue of anything from religious fundamentalism to political expediency.

There were three cinematic offshoots of these last two decades of the 20th century: the vibrant regional language cinema with directors such as Jahnu Barua (Banani, 1989; The Forest), Gautam Ghose (Paar, 1984; The Crossing), Rituparno Ghosh (Unishe April, 1994; 19th of April), Shaji Karun (Piravi, 1988; The Birth), Girish Kasaravalli (Tabarana Kathe, 1986; The Story of Tabara), Nirad Mahapatra (Maya Miriga, 1984; The Mirage), M. T. Vasudevan Nair (Kadavu, 1991; The Ferry), Aribam Syam Sharma (Ishanou, 1991; The Chosen One), Bhabendra Nath Saikia (Agnisnaan, 1985; Ordeal), Aparna Sen (Paroma, 1985), mostly viewed by specific language groups, and by the film festival audience; a strong strain of off-the-mainstream directors once dismissed as part of the “art cinema” circuit, but now watched by a growing urban audience: Shyam Benegal (Suraj ka Satwan Ghora, 1992; The Seventh Horse of the Sun), K. G. George (Adaminte Variyellu, 1984; Adam’s Rib), Prakash Jha (Damul, 1985; Slave till Death), Shekhar Kapoor (Bandit Queen, 1994), Balu Mahendra (Sandhya Ragam, 1991; The Evening Raga), Ketan Mehta, (Maya Memsaab, 1992; The Enchanting Illusion), Saeed Akhtar Mirza (Salim Langde Pe Mat Ro, 1989; Don’t Cry for Salim the Lame), Sudhir Mishra (Dharavi, 1991; Quicksand), Govind Nihalani (Ardh Satya, 1984; Half Truth), Amol Palekar (Thodasa Rumani Ho Jaye, 1990; Let’s Get Romantic), Jabbar Patel (Mukta, 1994; The Liberated Woman), Kundan Shah (Jaane Bhi Do Yaaron, 1983; Who Pays the Piper); and the thriving “formula films” of Bombay and Madras popular across the country.

Mythology, melodrama, romance, and violence remained the mainstay of popular cinema, while the relationship between reality and the moving image became increasingly complex. Mainstream films depicted militant nationalism (Vidhu Vinod Chopra, 1942: A Love Story, 1994), or individual defiance towards religious fundamentalism, political hypocrisy, and social oppression (Mani Rathnam, Bombay, 1994; Sibi Malayil, Kireedam, 1989; The Crown). Meanwhile, the underworld made inroads into the film industry: drug money produced cinema. The locale moved from rural India to major cities of the world. The rich non-resident Indian, back home from Europe or the United States, played a major role in the narratives. Young love became more permissive on screen. An amalgam of popular musical sounds from the West and the East dominated popular cinema.

In a global sharing of talents, the music director and composer A. R. Rahman (his soundtrack for Bombay sold over 50 million copies) created the music for Bombay Dreams (2002), an Andrew Lloyd Webber production with Shekhar Kapur as associate producer. Rahman had gained star status with his music for Roja (1992), directed by Mani Rathnam, whose films placed controversial social concerns within a framework of complex narrative, innovative camera technique, elaborate choreography, and thundering rhythm; Roja was an equally phenomenal success in all four South Indian languages and in Hindi.

Contributed By:
Ashish Rajadhyaksha

Reviewed By:
Shampa Banerjee


Cinema, Early Development of


Cinema, Early Development of, the historical development of the medium known variously as cinema, motion pictures, film, or the movies.


As a result of the work of Etienne-Jules Marey and Eadweard Muybridge, many researchers in the late 19th century realized that films, as they are known today, were a practical possibility, but the first to design a fully successful apparatus was W. K. L. Dickson, working under the direction of Thomas Alva Edison. His fully developed camera, called the Kinetograph, was patented in 1891 and took a series of instantaneous photographs on standard Eastman Kodak photographic emulsion coated on to a transparent celluloid strip 35 mm wide. The results of this work were first shown in public in 1893, using the viewing apparatus also designed by Dickson, and called the Kinetoscope. This was contained within a large box, and only permitted the images to be viewed by one person at a time looking into it through a peephole, after starting the machine by inserting a coin. It was not a commercial success in this form, and left the way free for the Lumière brothers, Louis and Auguste, to perfect their apparatus, the Cinématographe. This was the first successful projector, as well as being the apparatus that took and printed the film beforehand. With their Cinématographe they gave the first show of projected pictures to an audience in Paris in December 1895.

After this date, the Edison company developed its own form of the projector, as did various other inventors. Some of these used different film widths and projection speeds, but after a few years the 35-mm wide Edison film and the 16-frames-per-second projection speed of the Lumière Cinématographe became standard. The other important American competitor was the American Mutoscope & Biograph Company, which used a new camera designed by Dickson after he left the Edison company.


The earliest films showed just one scene, which ran for about a minute, the maximum that the standard lengths of film (65 or 80 ft/around 20 or 25 m) produced by Eastman Kodak or other manufacturers allowed. From the beginning, some of these films showed specially staged and acted scenes, such as the Edison Barbershop Scene and L’Arroseur Arrosé (A Trick on the Gardener) by the Lumières. However, the majority of early films were simple records of real-life scenes or stage acts. Some of these showed different views of related places and actions, and, although sold separately, they were probably joined together in succession by the showmen who bought and projected them. It seems that the step forward from this, to joining a number of staged scenes together to tell a longer story, was taken in 1898 by the Robert Paul company in Britain with Come Along Do! In this, the action moves from a scene outside an art gallery to a scene inside by means of a cut. However, most of the early multi-shot films were made by Georges Méliès. In his films, well-known stories such as Cinderella (1899) were told in a series of disconnected scenes joined by dissolves (see Special Effects), as was done at the time with slides in a magic-lantern show. Méliès’s long story films with their trick effects were the most commercially successful of all in the first few years of cinema, and they led other film-makers towards producing longer films. However, Méliès’s films made no real contribution to the development of film construction as we know it.

The important figures in this development were G. A. Smith and James Williamson, working independently in Brighton, East Sussex. Smith invented the basic technique of breaking a filmed scene down into a number of shots taken from different camera positions in his films Grandma’s Reading Glass (1900), As Seen Through a Telescope (1901), and The Little Doctors (1901). The first two of these introduced the view of things looked at through a magnifying glass and a telescope by one of the actors, by taking close shots inside a black circular mask. In The Little Doctors, a close shot of a kitten being fed medicine was cut into the middle of the shot showing the whole scene, the first such use of a “close-up” inserted into a scene. By 1903 Smith was making a conscious effort to get some sort of continuity matching in the actors’ positions across the cut. Smith then gave up ordinary film-making in 1903 to produce a system of colour cinematography called Kinemacolor, which was quite successful up to World War I.

James Williamson developed the movement of action through a series of shots taken in various locations in his films Attack on a Chinese Mission Station, Stop Thief!, and Fire!, all made in 1901. In these films the leading character was shown running out of one shot, then there would be a cut to another scene set somewhere else, and the character would then run into the frame to continue the story. Méliès also used a similar technique on one occasion in the same year, but in this case, the shots were joined with a dissolve rather than a cut.

Other film-makers in Britain took up these techniques in 1903 and developed longer films by having characters pursued through more and more different scenes. These were referred to as chase films. Afterwards, other less inventive film-makers in France and the United States, such as Edwin S. Porter, copied these techniques in various films, such as The Great Train Robbery. In France, Charles Pathé built a large company by ploughing back his profits to raise the production values of his films, and the film-makers he employed, led by Ferdinand Zecca, added extra polish to the continuity devices developed by the British. By building more studios and setting up multiple production teams, Pathé produced more films than any other firm in the world. A form of comedy unique to film began to develop, particularly at Pathé, by combining theatrical slapstick with the chase film.


In the early period, prints of films were sold outright by length, at so much per foot, through specialist film sales organizations to the showmen who exhibited them as items on a variety bill, or who travelled the countryside showing them in tent theatres. There were no permanent theatres dedicated solely to showing films. This changed in 1905, because by that time there were enough films that were several minutes long to provide the programming for cinemas running full time. Beginning in the United States with the original Nickelodeon in Pittsburgh, there followed a worldwide boom in film exhibition and production. Up to this time the only countries to have a film industry were France, Britain, and the United States, but now film-makers went into regular production in Italy and Denmark, followed fairly closely by Germany, Sweden, and Russia.

In the United States, other film-making companies had been set up to compete with the Edison and Biograph companies, and the most important of these was Vitagraph. This company was modelled on Pathé, and as soon as Albert Edward Smith and James Stuart Blackton had established it with the films they directed themselves, it too moved over to a multiple-production-unit structure with specialized departments for scripting, set construction, wardrobe, and so on. Smith and Blackton were responsible, along with the Pathé film-makers, for speeding up film narration and introducing the beginnings of the technique of cross-cutting between scenes of parallel action.

As the number of nickelodeons in the United States increased into the thousands by 1908, the standard pattern of exhibition became the one-hour show costing 10 cents, made up of several films one reel long. A reel of film was about 300 m (1,000 ft) long and ran for between 10 and 15 minutes.


Almost since the beginning of cinema, there had been litigation between the American companies over the basic patents for camera and projector mechanisms, and this was finally resolved in the formation in 1908 of a trust called the Motion Picture Patents Company (MPPC), which was intended to control totally the now immensely profitable film business. The models for this were the oil, steel, and railway trusts set up at the end of the 19th century in the United States. However, the members of the MPPC were unable to supply sufficient films to fulfill the demand, and new independent production and distribution companies were set up that had about half of the film business by 1912. At this point, the American government took legal action against the MPPC, which had really only succeeded in its aims for two years.


At the beginning of the nickelodeon period, various authors began to write about the cinema as a new art form, rather than as an interesting technical novelty. In France, in 1908, a new company called Film d’Art began production with L’Assassinat du Duc de Guise (The Assassination of the Duc de Guise), under a programme using artistically recognized writers, musicians, and actors, with a special theory about how films should be acted. This impressed film-makers even in the United States and eventually led to the creation by the film industry of a special category of films called art films. (This description is still used today for films of higher artistic intent, made on lower budgets for the minority audience that will appreciate them.) In the nickelodeon period, films from Italy, as well as France, showed an influence from the middlebrow or Salon Art of the time, particularly in set design and staging.


As part of its expansion, the Biograph company engaged an actor and playwright called D. W. Griffith to direct its films. Griffith was the first film-maker to appreciate fully and apply the existing techniques of film construction to dramatic storytelling. In particular, he used acting gesture in a powerful and inventive way, and he also got more shots into a given length of film than others, both by moving his actors from space to space and also by developing the technique of cross-cutting between parallel actions into a powerful motor for screen drama. For a couple of years, he directed all Biograph films, a total of 30 minutes of finished film per week. Eventually, some of his actors shared some of the directing load, including Mack Sennett, who took over the comedies at Biograph. The other major American film companies followed behind Griffith with respect to the increase in the number of shots in films, but an increase in camera closeness to the actors developed simultaneously in films from Biograph and Vitagraph. The latter company also made a conscious attempt at greater realism in its films, including the acting, and pushed this rather further than Griffith. However, Griffith was the leader in using changes in the closeness of the camera to emphasize the drama at appropriate points, and also in employing changes in the speed of cutting the film for the same purpose.

In 1907 the Selig company of Chicago moved some of its production to California, and it was gradually followed by most of the others, who appreciated the advantages of the new locations and the long hours of bright sunlight there.

It was in Westerns shot in California in 1912 that some of the final major developments in film construction took place. One of these was the use of reverse-angle shots, that is, shots taken in the opposite direction to the preceding shot. Although this sort of shot had appeared before on rare occasions, it had not been used as a standard method. Shooting a continuous scene with reverse-angle shots has a number of advantages, including presenting the actors' facial expressions more forcefully, enabling smoother continuity as actors move about the set, and drawing the audience more fully into the action.

Another development allied with this was the use of point-of-view (POV) shots, which meant taking a shot within a scene from the position of one of the actors seen in the preceding (or following) shot. Although point-of-view shots, with a black mask around them simulating the view through an optical instrument or a keyhole, had been used when appropriate since the beginning of the century, the idea of showing what a character in a film sees in an ordinary shot without masking had not been standard practice until this time. POV shots may also be reverse-angle shots and vice versa, but not necessarily so. A number of little-known film-makers developed these new techniques, but it is certain that D. W. Griffith was not responsible; in fact, he never really used POV shots even after they were developed by others. The acting in scenes within Griffith's films continued to be organized towards the front, following the theatrical manner.

As American film-makers cut their films up into more and more shots, they had to improve the continuity between these shots. The idea of cutting on action was refined, and the use of reverse-angle shots helped as well. Also, as American films were shot closer and closer to the actors, the acting in them became even more naturalistic and less restrained. The final feature of standard silent cinema was the increasing use of inter-titles, which represented what the actors were saying within the film scene. By 1914 these dialogue titles were being cut into the film at the instant the actors spoke the words, and so the effect was essentially the same as that of a stage play. Using all these devices, American films brought the audience right up and into the action, and by using their faster cutting to leave out the dull stretches, they proved irresistible to audiences worldwide. By the end of 1914, American films took first place at the box office in Europe and were taking over from the previously dominant French cinema even in France. The onset of World War I only clinched the inevitable world domination of American cinema.


The Griffith style of filming was applied to comedy by Mack Sennett and combined with the French comedy approach to produce something purely American. The comic effect was intensified by undercranking (turning the camera at a slower rate when the shots were filmed), which speeded up the action at the climaxes. In Europe, the most popular comics had been music-hall clowns such as Boireau (André Deed), also known as Cretinetti and Foolshead, but from 1909 comedians who created a developed character in a more naturalistic style had begun to appear, led by Max Linder. Charlie Chaplin followed the approach of Max Linder within a Sennett-type framework.


The new developments described above were limited to the United States before World War I, but European film-makers led the way towards longer films lasting several reels. The most notable of these were the Italian films dealing with subjects from Greek and Roman antiquity, such as La Caduta di Troia (1910; The Fall of Troy), and Cabiria (1914), both made by Giovanni Pastrone.

In French and Scandinavian cinema there were also long films made on modern subjects, and although the American system of film exhibition discouraged such lengths, American film-makers joined in a year or two later. This only really began in 1913, with films such as the sensational Traffic in Souls, dealing with the entrapment of girls into prostitution in New York.

As films several reels long became common in the United States, scriptwriting became more important, and here the tradition of the well-made play, as refined in the American theatre from European models, was taken over into the cinema. A basic feature of the well-made play was a well-developed causality in the plot, which ideally set two simultaneous tasks for the hero: to overcome a challenge and to get the girl as well. Also, the script should alternate action, comedy, drama, and romance from scene to scene throughout the screenplay, and indeed even within individual scenes if possible. These features were well understood by the people who had come from the theatre, such as D. W. Griffith, Cecil B. DeMille, and Mary Pickford. One of Griffith's first attempts at a real feature-length film was The Avenging Conscience (1914), which developed the use of Symbolism in film, and made use of giant close-ups of objects to convey the thoughts and emotions of the characters.

Film-makers in many countries took up these ideas, and the war years produced many films that utilized Symbolism, allegories, and parables. As part of this first explosion of interest in the possibilities of a truly film art, other technical devices were developed by American film-makers. One of these was the flashback, in the sense of an episode from the past inserted into the middle of a film when one of the characters remembers it. Around 1914 American film-makers tried out multiple flashbacks and even flashbacks inside flashbacks. Moving the camera about while filming (the tracking shot) also became popular for a few years from 1914. Although first developed in the United States, the tracking shot was particularly associated with the Italian epic Cabiria.

After his The Birth of a Nation proved an immense commercial success in 1915, Griffith used the profits to construct a grandiose four-hour film on the subject of intolerance, Intolerance (1916), with four different stories told simultaneously by cross-cutting between them. There was also cross-cutting between different strands of action within the four stories themselves, so that in the latter part of the film there are long strings of shots that have no immediate connection with one another. This proved too much for the general public, as did other similar films, and after this American cinema retreated to a more straightforward form of story presentation.

During the years from 1915 to 1925, the final polish was put on all the features of standard film construction by the brighter young directors who had come into the industry, such as Frank Borzage and Marshall Neilan, and some of the older directors, such as DeMille, picked up these techniques as well. Those who could not dropped out of the business. By the late 1920s the basic techniques of film construction, as the standard method of telling dramatic stories, were complete, and they came to be used everywhere, right up to the present. This is often referred to as "classical cinema".


During World War I the film industries in the various European countries were badly damaged by the war effort’s demand for manpower and materials, and also by the loss of markets. This was particularly true in Danish and Russian cinema, but all the other countries except Italy were also affected. Italian producers took advantage of their privileged position to make more and more grandiose films, and much effort was expended on a peculiar genre of diva films. In these dramas of unhappy love, the female star suffered and struck endless anguished Art Nouveau poses surrounded by male admirers and luxury. Because the Italians had at that time still not adopted the new American style of film construction, their films were unsaleable in the major markets, and their industry was ruined. Production in Italy fell away to a few dozen films by 1925, and Italian cinema did not recover until the sound period. Towards the end of the war, new talents and ideas found their way into German cinema and French cinema.

See also African Cinema; Art Cinema; Australian Cinema; Chinese Cinema; Eastern European Cinema; Indian Cinema; Irish Cinema; Japanese Cinema; Latin American Cinema; Neo-Realism; New Zealand Cinema; Spanish Cinema.

Contributed By:
Barry Salt