Saturday, August 31, 2019

Charlie Wilson’s War

For the second portion of my summer assignment I watched Charlie Wilson’s War. Throughout the movie, various governments affected many of the individuals. The movie is set during the Cold War, when the United States would not openly oppose the USSR; any action against the Soviets had to be taken covertly. Charlie Wilson was a U.S. Congressman who decided to help the Afghans in their battle against the Soviets. During the movie Charlie tells how he originally became interested in politics. When he was a boy, his twisted neighbor Charles Hazard, an elected city official, poisoned his dog Teddy. To get back at Mr. Hazard, Charlie went out and got a farming driver’s permit and drove voters out to the polls, telling them before they voted, “Not to influence your vote, but Charles Hazard poisoned my dog.” It was at this moment that Charlie decided he wanted to be involved in government, because through the democratic process he was able to get what he wanted. When faced with the conundrum of how to transport all of the weapons into Afghanistan, Charlie asked the President of Pakistan to get involved. The Pakistani president would not have had to play “middle-man” if the US had openly declared war on the USSR, but because of the necessity of covert operations he had to become involved and risk his country to help the United States and Afghanistan. To convince the chairman of the committee overseeing covert operations in the area to vote in his favor, Charlie was told he must get a blind Pakistani girl out of jail. Under Pakistan’s policies, the girl had been imprisoned after being raped, because she could not provide a description of her attacker and there were not enough witnesses to prove her innocence. The chairman said that if the President of Pakistan released her, then he would vote in Charlie’s favor.
Many sheep herds were also killed by Soviet helicopter pilots.

Friday, August 30, 2019

Bank of Credit and Commerce International

The Bank of Credit and Commerce International (BCCI) was the world’s largest Islamic bank. It was involved in so many criminal activities that it was eventually shut down, and it is widely perceived as one of the worst-run banks in the history of world banking.

Introduction

The Bank of Credit and Commerce International (BCCI) was established by a Pakistani banker, Agha Hasan Abedi, in 1972 and registered in Luxembourg. It reached its height within a decade, with more than 400 branches operating in 78 countries. It ranked seventh among the world’s largest private banks, with assets of US$20 billion (History Commons).

BCCI’s Involvement in Criminal Activities

BCCI became the target of a two-year undercover operation conducted by the United States Customs Service in the 1980s. The operation culminated in a fake wedding attended by drug dealers and BCCI officers from across the world, who had built working relationships and personal friendships with the undercover Special Agent Robert Mazur. The key bank officers were tried in Tampa over six months, after which they were convicted of serious charges and imprisoned for lengthy periods. Many other crimes were revealed through cooperation between bank officers and law enforcement authorities (American Patriot Friends Network).

Major Tips on BCCI’s Criminal Activities

Congressman Charles Schumer conducted a Congressional investigation covering 1979 to 1991, which revealed around 700 tips regarding BCCI’s criminal activities. The following are the major tips received by federal law enforcement agencies, which illustrate BCCI’s involvement in criminal activity: 1. Promotion of political unrest in Pakistan. 2. Financial support for terrorist groups. 3. Smuggling of weapons to countries such as Iran, Libya and Syria. 4. Links to organized crime in Italy and the United States.
The above are only the major tips; in all, around 700 tips were revealed through the Congressional investigation (History Commons).

CIA’s Illegal Involvement in BCCI

For ten years, the CIA had been paying its 500 British informants through BCCI. Some informants reported information on illegal overseas business deals and sales of British arms to the CIA. The spectrum of CIA informants included: 1. 124 people in politics or government 2. 53 in banking, industry and commerce 3. 24 scientists 4. 90 in the media 5. 75 in academia 6. 124 in communications. Although individuals were not specifically named, a few of them were in senior positions (American Patriot Friends Network).

Closure of BCCI

The Bank of England shut down the Bank of Credit and Commerce International (BCCI) on July 5, 1991; regulators closed BCCI offices in dozens of countries and seized about $2 billion of the bank’s $20 billion in assets. Many militants, including Bin Laden, had operated accounts at BCCI. The President of the UAE, Sheikh Zayed bin Sultan, owned 77% of BCCI’s shares, and approximately 1.4 million account holders likely lost their money upon closure of the bank (History Commons).

Conclusion

The Bank of Credit and Commerce International (BCCI) financially supported many militant organizations with money generated through illegal activities, including illicit drug trafficking and arms trafficking. It is therefore fair to claim that BCCI worked viciously, violently and criminally in the service of deadly terrorism across the world, that it deserved to be shut down, and that the criminals who operated it will never ‘Rest in Peace’ (Ambit ERisk).

References

Ambit ERisk, Case Study: Bank of Credit and Commerce International, Retrieved on May 4, 2010 from http://www.erisk.com/learning/CaseStudies/BankofCreditandCommerceIn.asp
American Patriot Friends Network, Bank of Credit and Commerce International, Retrieved on May 4, 2010 from http://www.apfn.org/apfn/BCCI.htm
History Commons, Bank of Credit & Commerce International, Retrieved on May 4, 2010 from http://www.historycommons.org/entity.jsp?entity=bank_of_credit_and_commerce_international

Thursday, August 29, 2019

Mabel McKay Weaving the Dream Essay Example | Topics and Well Written Essays - 500 words

Mabel McKay Weaving the Dream - Essay Example Mabel was a very quiet and observant child who always stared at things. She was so weak that Sarah had to answer to people who said the girl looked as if she were starving to death. When she began to speak, a strange thing happened: she started having restless nights, and she began to say things she was not supposed to know. She talked about her step-mother, the big lady. Everyone was surprised at how she could have known anything about that, as she had been an infant then. Sarah considered her a special child with unique qualities. When Mabel was twelve years old, her mother Daisy returned and tried to hand her over to an old Colusa man. Sarah had to move Mabel McKay to the house of Mrs. Spencer, a very nice lady who usually hired Indians to cut the grapes each fall. There were many ways in which the local inhabitants followed indigenous practices and views. They expressed their views and followed traditional customs in different gatherings and festivals. For example, when Sarah went to see her sister Belle, both “sat on the floor, in the old style, even though Belle had a new table with four perfectly comfortable wooden chairs. And when they got sleepy, they camped right there, folding up their shawls for pillows” (Sarris, 16). Life in the valley was still very simple, yet a few things had changed. There were roads everywhere. Also, the large oak tree along the creek looked dry, and along the water where sweet clover had grown year round there was nothing but dusty earth and cow dung (Sarris, 17).

Wednesday, August 28, 2019

Female Genital Mutilation Essay Example | Topics and Well Written Essays - 750 words

Female Genital Mutilation - Essay Example First, it is worth identifying the villages in the area where FGM is most prevalent. This information may be obtained from schools, by liaising with teachers to ask students whether FGM is practiced at home, or from reports at hospitals or the chief’s office. Once the targeted areas are identified, a committee is formed to help develop strategies for sensitizing people to the harm FGM causes to our women and to urge members to be on the lookout. Meetings are then held in specific villages, and teachers are asked to educate students in schools about the dangers to the girl child, especially the health and economic impacts described above. To stop communities or families from practicing FGM, rules are set that impede them, and anyone caught must face the consequences. It is important to educate professionals in schools so that they help sensitize students to the dangers of FGM; this will help change the incoming generation, since they are still in preparation. In the health sector this is equally important, because most people seek health services, and through health education the health personnel will be able to educate the lucky few. All of this will create professional support for women working hard to educate the public on why FGM is a violation of human rights with no medical value, and with such support they will be able to minimize such acts, if not eradicate them. This has made people, especially children, grow up knowing that it is one of their rights.

Tuesday, August 27, 2019

GDP as a measure of welfare Research Paper Example | Topics and Well Written Essays - 500 words

GDP as a measure of welfare - Research Paper Example Russia, for instance, has an education system that shapes its future economic development, as it specializes in providing learners with basics in various fields such as technology and trade. This shows that GDP cannot be the only factor used to gauge the performance of a country’s economy. Education at large is an expenditure to the economy, and it should therefore be counted when measuring Russia’s standard of living. The life expectancy of both Russia and Kuwait is high, and it depends heavily on the economy because of the spending these two states undertake to keep life expectancy high, such as spending on good medical facilities; the higher the life expectancy, the higher the future economic growth. These two states have tried to encourage an equal balance of income among individuals. They do this by setting standards at a certain level to suit every individual when calculating the country’s living standards and the measures included. Hence GDP per capita has a close relationship with these other alternative measures in determining living standards. These alternative factors should at all costs be included in the measure of living standards across these two nations. Another important measure of the living standard is the taxation rate in the economy of the countries involved. Taxation adds to the revenue income of a nation’s total budget, which qualifies it as a determinant in measuring the standard of living, since it constitutes income to the government and thus to the country. The larger the tax rate, the higher the revenue income to the government. The six countries are shown with their respective tax rates. Kuwait belongs to the same category as Russia because
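The two measures discussed above, GDP per capita and tax revenue as a share of GDP, are simple ratios. A minimal sketch of the arithmetic follows; all figures and country labels below are invented for illustration and are not real statistics for Russia or Kuwait:

```python
# Illustrative computation of GDP per capita and tax revenue.
# All numbers here are hypothetical placeholders, not real data.

def gdp_per_capita(gdp_total, population):
    """GDP per capita = total GDP divided by population."""
    return gdp_total / population

def tax_revenue(gdp_total, tax_rate):
    """Government tax revenue as a simple fraction of GDP."""
    return gdp_total * tax_rate

# Invented example figures: GDP in billions of USD, population in millions.
countries = {
    "Country A": {"gdp": 1700.0, "population": 144.0, "tax_rate": 0.20},
    "Country B": {"gdp": 140.0, "population": 4.0, "tax_rate": 0.15},
}

for name, c in countries.items():
    per_capita = gdp_per_capita(c["gdp"] * 1e9, c["population"] * 1e6)
    revenue = tax_revenue(c["gdp"], c["tax_rate"])
    print(f"{name}: GDP per capita = ${per_capita:,.0f}, "
          f"tax revenue = ${revenue:.0f}bn")
```

As the essay argues, the per-capita figure alone ignores education spending, life expectancy and income distribution, which is why the alternative measures must be considered alongside it.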

Monday, August 26, 2019

Sex Education in America Article Example | Topics and Well Written Essays - 1000 words

Sex Education in America - Article Example A better option would indeed be to sit them down and explain to them, in moral or practical terms, what they need to know about sex. Knowing the stages of the 28-day ovulation cycle did not benefit that 16-year-old pregnant girl in the clinic; but perhaps if that girl who had remarked that she hoped childbirth did not hurt as much as sex had been told how to say no, or how to avoid doing something she clearly never enjoyed, she wouldn't be in the position she currently was. Teachers should sit students down and explain the social aspects of teenage pregnancies, explain the possible 'solutions' one relies on when such a situation arises, and explain how none of them is ever really a solution. Furthermore, rather than scaring students away from sex using pregnancy as a tool, they should be educated on sex itself, in practical terms rather than scientific ones. Sex is not a tool to keep someone interested in you, nor is it something to increase intimacy. Rather, it is something used to express intimacy, and until students know how to do that, it would be like speaking French without actually knowing how. Furthermore, as that girl in the high school told you, Ms. Quindlen, most girls will succumb to intercourse under pressure from their peers or their boyfriends. Perhaps girls should also be taught that there is no need to feel pressured to keep a friend or a boyfriend who judges them on their willingness to have sex. Yet we find that none of these issues is ever actually discussed in sex ed classes, nor is student input ever taken so that their confusions or queries can be cleared up. Indeed it is possible that, as you suggest, the future or aftermath is such a vague, distant matter that the students aren't even aware of their confusion about it. If all that matters is the build-up to the act, they will not find themselves focusing on the ifs, buts, whys and hows of the matter.
Perhaps this is because parents are not comfortable with the idea of sex being taught to their children in such an accepting manner, because, idealistic or not, many parents do not want to accept that the idea is relevant to their child. Nonetheless, as their teachers and parents, it is our job to protect our children and educate them on the matter, and I do feel that sex education needs to be reconsidered in the way it is being taught. As for the matter brought up by Ms. Austin, I also completely agree with what you had to say. Indeed, as you said, after the revolution of the sixties and the current changing trends, many girls today feel that, as career-oriented women rather than the basic definition of a housewife, home economics and learning how to run a home are not relevant to them. Men, on the other hand, feel that it is the woman's job to handle a house, so they too feel it is not relevant to them. This, in my view, is the basic reason for the decreasing popularity of home economics, and perhaps for the rising rate of broken or mismanaged households. Home economics is essential for anyone hoping to have some form of household or family, whether as a full-time housewife or husband, or as a part-time one.

Sunday, August 25, 2019

Article Critiques on Human Resource Mgmt Case Study

Article Critiques on Human Resource Mgmt - Case Study Example Although Lisa was not an HR person, she had somehow developed skills that aided her in her new career. Because Lisa was a people person, more on the serving side of the table, she was able to create an unusual bridge between the administration and Microsoft employees. She grasped the actual need of the time: she realized how important it is to treat employees with confidence, attention and trust in order to achieve organizational goals. Reading the article, we also realize that it is vital for the administration and the employees of a company to be on the same level of zeal and commitment; otherwise we should not expect our organizations to succeed, for then we would have a situation in which the employees had no goals of their own but simply worked robotically. And if employees don't set and achieve targets of their own, how would they ever be able to do so for the organization? More and more companies in the U.S. are now shifting from their hyperactive work mode towards providing their employees with tips on how to sleep well at night. They are also providing for their employees' 'nap needs' at work. Arshad Choudhry came up with the invention of MetroNaps after realizing that his colleagues were going into the washrooms to take naps during work hours. This does not sound unfamiliar at all. I believe that the actual performance potential of an individual can drop drastically if he or she has not been able to sleep well. Companies in the U.S. are brave enough to recognize and accept that this is the need of the hour, and accordingly they provide their employees with these MetroNaps pods so they can break for naps during work hours. Without a doubt, if research is done, it will prove that such employees work harder.
But then, we cannot ignore the fact that there might be some individuals who would find it embarrassing to sit in one of those MetroNaps pods. They might see the pods as being for individuals who are unable to cope with work stress or workload. On the other hand, employees might just start sleeping well at night and never end up in the nap pod at all. There is no denying that for the first few years these nap pods will not be considered quite acceptable; people may want to stay away from them in order to prove themselves competent enough to do without. But as time passes it will become a norm, and one day every employee will simply get up, sit in that MetroNaps pod, and ease off before restarting. ARTICLE # 3 Abstract: This article focuses on how globalization is changing organizational trends in the world of today, and how it is affecting organizations around the globe. It also notes that in today's world we are more likely to see organizations which believe in running with time, and so invest heavily in keeping their employees mobile and always connected to the business world through the latest technology. Analysis: The world of today is becoming more of a global village than ever. To keep up with the fast changing trends

Saturday, August 24, 2019

Case study (memo detailing) - Case Study Example | Topics and Well Written Essays - 500 words

(memo detailing) - Case Study Example It already has approval from the legislature, but the company is torn on what creative strategy to use to implement the solution. Putting up a nuclear reactor is controversial, since it will require increasing the present charges, about which customers are already complaining, and there are sectors which are against nuclear plants. As it is, however, energy costs will remain high and will continue to increase, since power has to be imported from California and other states. Building a reactor will bring down the cost and make Arizona self-sufficient in energy, making supply more stable and thus contributing to lower cost. Before this proposal becomes a reality, there must first be market acceptance of building a nuclear reactor. This is necessary because the present market base will be paying part of the cost of the reactor, since they will be the ones who benefit from it. In addition, it will also be necessary to communicate that the nuclear reactor is safe, to allay fears about its presence. In order to mitigate market resistance to the proposal of building a nuclear reactor and to facilitate market acceptance of the associated cost, the company must launch an information campaign about the benefits of putting up the reactor. The message must address the consumer base's concerns, which are cost and safety. To help consumers accept the necessary price hike and understand why it is necessary, the computation of the energy cost that will be saved once the nuclear reactor is operational must be communicated. This will enable consumers to understand that they will save money in the long run once the reactor is built, and that the price increase is temporary and necessary. The safety features of the reactor must also be included in the campaign, to avoid protests during construction of the nuclear reactor.
The information campaign must use multimedia to reach the various sectors of society in Phoenix, Arizona.

Terrorism Essay Example | Topics and Well Written Essays - 1750 words

Terrorism - Essay Example Though terrorism has been part of various societies around the globe for the last several centuries, the contemporary world has become its most despondent victim. The horrible toll of terrorism in the modern world is partly due to the invention of the latest dreadful and destructive weapons, techniques and strategies, which have taken the entire world into terrorism's awkward clutches. Additionally, the fast-increasing gulf of hatred and detestation between cultures, faiths and civilizations is inviting violent clashes and conflicts on the very face of the earth, and political authorities and governments appear helpless even in combating this curse. Consequently, collective measures are being introduced, on the concrete foundations of multicultural and inter-faith cooperation, to defeat and crush the widespread terrorist nuisance through collective effort. Theorists, intellectuals and philosophers blame social injustices and inequalities as the root cause of the expansion of terrorism in the world. They cite the Marxist perspective, which declares the conflict between haves and have-nots the by-product of chaos, confusion and anarchy in human societies. The theorists are of the opinion that the denial of opportunities, resources and privileges to developing countries is creating frustration in the minds of the masses, which always results in violent reaction to the injustices and inequalities observed and promoted by the elite stratum of society on the one hand, and by the powerful states of the world on the other. Hence, it is social inequality that gives birth to violent struggle against exploitation. Marx lauded the basic premise that labor was the source of all wealth, and that the profit of the capitalist was based on the exploitation of the laborers.
â€Å"The capitalists performed the rather simple trick of paying the workers less than they deserve d, because they received less pay than the value of what they actually produced in a work period.† (Ritzer & Goodman, 2003:22) The modern terrorism is also the part of the same ideology created and implied out of sheer frustration and injustices. The present paper aims to identify the problem of terrorism in the light of the ideology, claimed and presented by various terrorist organizations, where these groups try to justify their actions and violent attacks against their opponent forces and groups to set the haves-not free from the exploitation of capitalism and imperialism. The groups under analysis including Baader-Meinhof of Germany, the Liberation Tigers of Tamil Elam in Sri Lanka, and Iranian state-sponsored terrorism reveal one and the same motif, which has been analyzed in the following lines 1. GERMANY A. Background and Facts related to Creation of RAF: World War I had drawn a clear and indelible boundary line between the nations on the basis of their economic positio n. Consequently, the conflict between the prosperous and poor states started widening to a great extent. The Germans had commenced World War II as revenge against the humiliating terms of Versailles Treaty of 1919, but the War culminated in favor of the capitalist societies, and thus added fuel to fire in the further demarcation between the rich and poor countries. Consequently, many extremist groups raised their heads as reaction to the growing exploitation prevailing in the imperialistic

Friday, August 23, 2019

Collapse of the Societies Essay Example | Topics and Well Written Essays - 1000 words

Collapse of the Societies - Essay Example He has clearly pointed out that, more than climatic variation, a society's attitude towards addressing its environmental problems plays the key role in its demise. It depends on the people and the government how they view these problems and what measures they take in response. He has stressed one factor: it is up to a society to choose its fate, to fail or to succeed. He clearly notes that some societies have sustained themselves even in hard times, such as Japan, while on the other hand some societies have totally collapsed, such as Somalia or Zimbabwe, and some are close to collapse, such as Nepal. According to him, we can learn from their downfall what key factors played a significant role in their collapse, and then analyze where we ourselves stand today: near collapse, or already collapsed. Further, Diamond has identified that environmental problems are due to the irresponsible attitude of humans. Humans have been the basic cause of societal collapse, as in the case of the Vikings, who did a great deal of damage to their environment: they caused soil erosion and wildly cut down trees, resulting in deforestation, and gradually the environmental problems and climatic changes caused inevitable damage to the overall society, until a point came when those societies were swept from the world. In other words, we can say that humans' interaction with their environment plays an important role in determining the fate of a society. Here Jared quotes the example of Montana, which was once a prosperous state with sound environmental conditions in the USA. But today the societal pattern of Montana has completely collapsed. It is one of the poorest states, with unstable environmental conditions.
The climate of Montana is getting warmer, with no proper action plan from the government. We can say that Montana's vulnerable condition is becoming a threat to the United States as a whole. On the other hand, Jared observes that some societies collapse just when they reach their peak, such as the Soviet Union, which faced a sudden decline at the time of its greatest power. According to him, the main causes of such sudden collapse may be a mismatch between the resources available and their consumption, or a mismatch between economic potential and economic plans. There are environmental factors that make some societies more fragile than others. Among the basic factors that cause a society to collapse, according to him, is the conflict between the short-term decision-making interests of the elites and the long-term interests of the society as a whole: whatever serves the short-term interest of the elites could turn out to be worst for the society in the long run. American society could suffer the same consequences from its governing bodies and business elites. It is imperative for societies to understand all their problems and to take effective measures to address them. Addressing one problem will not be sufficient to tackle and overcome the issue of collapsing societies; all the problems must be highlighted and addressed on time to avoid collapse. Analysis

Thursday, August 22, 2019

Pursuit of Happiness Essay Example for Free

Pursuit of Happiness Essay “Keeping up with the Joneses” (Baumgardner & Crothers, 2009) is a popular saying in America today, and not far from the truth concerning the prevailing mentality and opinions about happiness and well-being. The Declaration of Independence also states that the pursuit of happiness is an unalienable right (Baumgardner & Crothers, 2009). Society today offers opportunities to fulfill anyone's desires or dreams, yet as individuals we remain concerned about what others around us think. This thought process is evident throughout American culture today and in its history (Baumgardner & Crothers, 2009). The concepts of culture and happiness are compared along individualist-collectivist (I-C) lines, which provides the basis for overall well-being and what it means to be happy. Research comparing two cultures, Americans and East Asians, found subjective well-being (SWB) to be low in Japan, where income trends are high, when compared to America. This finding was considered void because the Asian cultures did not measure happiness in terms of the self or individuality, so the studies had to be modified. Later reviews revealed that Americans are encouraged to identify and express their unique sense of self as a way to influence and distinguish themselves from others, whereas, in contrast, Asians are encouraged to identify and express attributes that behoove the community as a whole, developing self-criticism and self-discipline that enable fitting in with others. This allows for improvement and enhances decision making that reinforces the social norm (Baumgardner & Crothers, 2009).
Because happiness and feeling good about oneself are part of American culture, American parents rear their children to think for themselves and pursue things that make the child happy or feel good; this perspective is consistent with subjective well-being (SWB), the idea that happiness is both subjective and individualized. It relates to planning to pursue the things that express who we are (traits and characteristics) and what separates us from others: uniqueness, and staying true to yourself (Baumgardner & Crothers, 2009). A good example would be a middle-income family allowing their children to explore different activities, such as sports, art, or music, to find what brings the individual joy, or to discover new skills that will eventually lead them to influence others and themselves. This contrasts sharply with Asian culture, where happiness carries less importance. Children are encouraged to restrain their emotions, to fit in with others, and to take pride in teamwork (sympathetic relationships, or understanding others' perspectives and accepting them): “Children are expected to learn how to adjust themselves to others so as to enhance and maintain harmonious social relationships” (Baumgardner & Crothers, 2009). This thinking can also lead to a critical mindset towards oneself and possibly others. East Asians do not put an emphasis on happiness, life satisfaction, or the understanding and pursuit of positive emotions, but believe happiness is fleeting and that one should live a composed life from moment to moment, in appreciation. Americans, or individualistic cultures, place emphasis on positive feelings that are directly related to achievement or accomplishment; it is believed that good feelings promote self-esteem, independence, and happiness. A good example would be receiving a scholarship for earning a high GPA.
Interestingly enough, goal achievement is also important in collectivist culture, or Asian culture; when subjects were asked, the research perspective placed on SWB was attributed to Western influences (Baumgardner & Crothers, 2009). However, members of both cultures admitted to pursuing goals for personal satisfaction rather than to please others.

Wednesday, August 21, 2019

Factors Affecting Labour Turnover Commerce Essay

Factors Affecting Labour Turnover Commerce Essay This proposal is on the factors that affect labour turnover of life insurance agents at Old Mutual Life Assurance Company Kenya. A life insurance company relies on a stable agency force to sell and service its life insurance products, enabling it to make a profit from each life policy. The exit of an agent affects the servicing of the policies sold, with a negative impact on the company's profitability and on the investable funds available for the nation's economic development. Therefore, the objective of this study is to identify the relevant factors and find out how, and to what extent, they affect labour turnover of agents in Old Mutual Life Assurance Company Kenya. It will also seek solutions to the problem and make recommendations. This study will benefit the management and agency managers of the company, other life insurance companies, current and potential investors in life insurance companies, as well as the government and its agencies. The study will use a descriptive research design involving a field survey of targeted respondents of Old Mutual Life Assurance Company Kenya. The target population will be the regional managers, sales managers and agents at its branches in Nairobi, numbering about 200. A sample of 15% will be taken using a simple random sampling technique. The data will be collected by questionnaire and analyzed using descriptive statistics, including tables, charts, diagrams and frequency distribution measurements such as mean, mode and median.

OPERATIONAL DEFINITION OF TERMS

Life Insurance: Life assurance is an aspect of financial planning which provides for the payment of a capital sum to the dependants of a policy owner on his death, or to the policy owner on survival to policy expiration, in consideration of the payment of a smaller, often regular, amount to the life office.

Life Insurance Sales Agent: Life insurance agents specialize in selling policies that pay beneficiaries when a policyholder dies.
They also sell other varieties of Life insurance products, such as annuities that promise a retirement income, Health insurance, and short-term and long-term disability insurance policies. Agents may specialize in any one of these products or function as generalists, providing multiple products to a single customer. They earn commission and other benefits for their effort.

TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES

ABBREVIATIONS AND ACRONYMS
LIMRA - Life Insurance Marketing and Research Association
AKI - Association of Kenya Insurers
IIAA - Independent Insurance Agents of America
COP - Certificate of Proficiency
OMLAC - Old Mutual Life Assurance Company

CHAPTER ONE: INTRODUCTION

This chapter will focus on the background of the study, the statement of the problem, the objectives of the study, the hypothesis or research questions, and the significance, scope and limitations of the study.

1.1 Background to the Study

Life Insurance is an aspect of Personal Financial Planning which enables a person to provide for his or her future financial needs in old age, and for those of his or her dependants in the event of unforeseen circumstances. Such unforeseen circumstances include premature death, total permanent disability resulting from accident, or critical illnesses which may reduce or terminate a person's income-earning capacity. The risk of premature death is one of the major personal risks faced by most individuals. The financial consequences resulting from the death of a breadwinner before adequate resources have been established for dependants can be severe. Life insurance is a major source of financial protection against premature death. There are three main sources of life insurance protection: individually purchased, employer-sponsored and government-sponsored coverage. The most dependable source is individually purchased Life insurance protection, because the other two may not be available to an individual.
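The sampling design described in the proposal abstract (a 15% simple random sample of a target population of about 200 respondents) can be sketched as follows. This is a minimal illustration, not the study's actual instrument: the respondent labels are placeholders, and the seed is chosen only to make the draw reproducible.

```python
import random

def simple_random_sample(population, fraction=0.15, seed=1):
    """Select a simple random sample of the given fraction, rounded to a whole count."""
    k = round(len(population) * fraction)
    return random.Random(seed).sample(population, k)

# Placeholder sampling frame: ~200 regional managers, sales managers and agents in Nairobi
frame = [f"respondent_{i}" for i in range(1, 201)]
sample = simple_random_sample(frame)
print(len(sample))  # 15% of 200 -> 30 respondents
```

Each member of the frame has an equal chance of selection, which is what the "simple random sampling technique" in the abstract requires.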
Life Assurance is a service premised on a promise to pay a certain amount of money in the future on the occurrence of a stated contingency which usually depends on the duration of human Life. Hence, the best way of selling this service is one-on-one personal selling through a salesperson, traditionally called an Agent. One major problem facing Life insurance companies in selling their products, and hence in their profitability, is the high rate of labour turnover among their Agents. A Life insurance company relies on a stable Agency force to sell its Life insurance products. These products are usually long term, running for a minimum of five years. The profitability of a policy to the Life insurance company depends on the consistent servicing of that policy by the Agent. When an Agent leaves an insurance company while the policies he sold are still in their early years, such policies will no longer be serviced. Hence, the company will lose in terms of future inflow of investable funds, loss of commission that has been paid in advance to the Agent, and payment of surrender values arising from lapsed policies. This situation threatens the survival of Life insurance companies, and it has attracted the attention of writers and researchers. According to Leverett et al. (1977), the death of the independent Agency system as it exists today has been predicted for several years. Increased competition from newer sources, such as the entrance of Life insurance companies into the property-liability field, as well as traditional competition from the direct writers of insurance, tends to reinforce the foundation for such a prophecy. The attraction and retention of new agents into the independent agency system is vital to the continued successful existence of that system. A number of studies have indicated that the retention rate for agents recruited into the Life insurance industry is very low.
According to one study, the two-year and five-year retention rates for 13 large life insurers in the United States were 39 and 13 percent respectively. Furthermore, the retention rate for smaller life insurers was found to be even lower than for their larger counterparts. These figures are not totally unexpected given the lack or inadequacy of training and educational programs offered to new life insurance recruits. LIMRA (2009) points out that it has been of great concern to many managers that only 5% of sales representatives who join the industry remain in it and become successful sales representatives. Of that 5%, only 2% become high achievers in the industry. Despite the fact that those on commission earn more than the majority of salaried people, it has remained a very challenging field, especially for young people from college and university who wish to earn good money easily and fast. Burand (2010) notes that, over time, agent retention in the life insurance industry remains a perennial challenge for companies operating within the traditional career agency system. According to LIMRA (2010), 68% of agents leave companies within their first two years. Many managers presuppose that retention rates correspond with a company's effectiveness in building its sales force and organization in general. Company bottom lines would benefit substantially from increased retention rates.

1.1.1 Background to the Scope of Study

Old Mutual Life Assurance Kenya belongs to an international long-term savings, protection and investment group. The Group provides life assurance, asset management, banking and general insurance in 33 countries (in Africa, Europe, the Americas and Asia). It has over 15 million customers and approximately 55,000 employees. The vision of the Group is to be their customers' most trusted partner, passionate about helping them achieve their lifetime financial goals.
The Group was founded in 1845 and has expanded from its origins in South Africa over the last decade through organic growth and strategic acquisitions. It is listed in the UK, in South Africa and on three other African exchanges. Old Mutual Kenya (OMK) started doing business in Kenya in the late 1920s. The vision of the company is the same as its parent company's, but limited to East Africa. The mission statement of the company is as follows: "Through understanding and meeting our customers' needs, we will profitably expand our market for wealth accumulation and protection in Kenya."

1.1.2 Background to the Population Area and Organizational Chart

Old Mutual has 16 retail marketing outlets throughout Kenya, including 4 in Nairobi. The retail marketing arm is under the jurisdiction of the Head of Sales, who is based at the head office. The Head of Sales is part of the executive management and reports on the activities of the sales force. The Head of Sales is assisted by the Head of Channels, who oversees the activities of the Branch Managers in different locations. Under the Branch Manager are Sales Managers, who manage the Agents.

1.2 Problem Statement

The insurance industry has suffered astronomical losses resulting from the high rate of labour turnover among Agents, especially new agents. New agents are sales representatives who have been with the company for less than four years. An annual report published by LIMRA International in 2004 pointed out that four-year agent retention has not been able to move above 13 percent. This translates to 87 percent of new agents in the insurance industry leaving their respective companies within the first four years of signing the contract. An agent in the insurance industry, especially in life insurance, starts becoming profitable only after the third year of his or her contract with the company.
This is because the initial years are characterized by huge training costs, initial allowances which are not tied to production, and a forward-earning commission system. This results in high expenses for the firm in the early years of recruiting an Agent, with the hope of recouping the cost gradually from the future earnings of the Agent. It implies that most insurance companies have been incurring huge losses because of the consistently poor retention rate of new agents. Insurance agent retention has become a matter of concern, as the Association of Kenya Insurers (AKI) highlighted in its 2011 report concerning the development of tied agents in the insurance industry in Kenya. The AKI report (2010) observed that the lack of personal development of many Agents who join the insurance industry is an issue that requires attention if the industry is to remain relevant in the country. Lack of personal development among agents has been cited as an important factor affecting agent retention in the industry. A Life insurance company relies on a stable Agency force to sell its Life insurance products. These products are usually long term, running for a minimum of five years. Agents are paid commission for any policy sold. The commission is structured in such a way that a substantial percentage, up to 50% of the premium, is paid in the first year, and between 10% and 40% is paid in subsequent years up to the fifth year or, sometimes, the end of the policy term. The profitability of a policy to the Life insurance company depends on the consistent servicing of that policy by the Agent. If an Agent leaves an insurance company while the policies he sold are still in their early years, such policies will no longer be serviced. Hence, the company will lose in terms of future inflow of investable funds, loss of commission paid in advance for future services of the Agent, and the early lapse of such orphan policies.
The economy also suffers because it will be starved of the investable funds which aid the economic development of the nation. Old Mutual Life Assurance Kenya has experienced a drop in its number of Agents in past years. While it had 500 Agents in 2010, it currently has about 200. This is also reflected in the revenue of the company from its individual life insurance segment. The premium income generated by the Agents over the past four years is presented in the following table.

Table 1. Premium Income of Agents in Old Mutual Life Ass. Co. Kenya (2008-2011)

Year | Premium Income (Kshs '000) | Difference | Percentage Difference
2008 | 386,367                    | -          | -
2009 | 378,056                    | (8,311)    | (2%)
2010 | 376,496                    | (1,560)    | (0.41%)
2011 | 349,429                    | (27,067)   | (7.18%)

Source: OMLAC (2012)

Figure 1. Premium Income of Agents in Old Mutual Life Ass. Co. Kenya (2008-2011). Source: OMLAC (2012)

Life insurance premium from the sales Agents should increase in geometric progression, with a positive cumulative effect on the revenue of the company. If the premium from new policies sold is added to the premium of existing policyholders, it should lead to an increase in premium income from year to year. However, the reverse is the case in Old Mutual, where premium income from Life insurance Agents declined from Kshs 386 million in 2008 to Kshs 349 million in 2011. This represents a drop of 9.56% in premium income in 2011 compared to 2008. It is against this premise that this study will focus on the factors affecting labour turnover of Life Insurance Agents in Old Mutual Life Assurance Company Kenya.

1.3 Objectives of the Study

The objectives of the study will include the following:

1.3.1 General Objective

To investigate the factors that affect labour turnover of Life insurance Agents in the Life insurance industry in Kenya.
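The year-over-year differences and the overall 9.56% drop quoted from Table 1 can be checked with a short script. This is a verification sketch only; the figures are the OMLAC (2012) values quoted in the text.

```python
# Premium income of agents, Kshs '000, as reported by OMLAC (2012)
premiums = {2008: 386_367, 2009: 378_056, 2010: 376_496, 2011: 349_429}

years = sorted(premiums)
for prev, curr in zip(years, years[1:]):
    diff = premiums[curr] - premiums[prev]
    pct = diff / premiums[prev] * 100
    print(f"{curr}: difference {diff:,} ({pct:.2f}%)")

# Overall drop from 2008 to 2011 (the 9.56% figure cited in the problem statement)
overall = (premiums[2008] - premiums[2011]) / premiums[2008] * 100
print(f"Overall drop 2008-2011: {overall:.2f}%")  # 9.56%
```

The per-year percentages here use the previous year as the base, while the overall drop uses 2008 as the base, which reproduces the 9.56% cited in the problem statement.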
1.3.2 Specific Objectives

To find out how remuneration affects the turnover of Life Insurance Agents of Old Mutual Life Assurance Company Kenya.
To determine the effects of training on the turnover of Life Insurance Agents of Old Mutual Life Assurance Company Kenya.
To investigate how the physical work environment affects labour turnover of Life Insurance Agents of Old Mutual Life Assurance Company Kenya.
To establish to what extent job satisfaction affects labour turnover of Life Insurance Agents of Old Mutual Life Assurance Company Kenya.
To determine to what extent level of education affects labour turnover of Life Insurance Agents of Old Mutual Life Assurance Company Kenya.

1.4 The Research Questions

The study will seek information to answer the following research questions:
To what extent does remuneration affect turnover of Agents in Old Mutual Life Assurance Company Limited?
To what extent does training affect turnover of Agents in Old Mutual Life Assurance Company Limited?
How does the physical work environment affect labour turnover of Agents in Old Mutual Life Assurance Company Limited?
How does job satisfaction affect turnover of Agents in Old Mutual Life Assurance Company Limited?
To what extent does level of education affect labour turnover of Agents in Old Mutual Life Assurance Company Limited?

1.5 The Significance of the Study

The findings from this study will benefit the organization and its stakeholders, the life insurance industry, the government and other researchers in this field. The top management of Old Mutual Life Assurance Company Limited, consisting of the Managing Director, the Head of Sales and the Head of Channels, are likely to use the findings to understand the reasons behind labour turnover of Agents in the company. It will also help the Regional and Sales Managers of Old Mutual Life Assurance Kenya to improve their management techniques towards reducing labour turnover of Agents in their regions and sales units.
The Sales Agents will also benefit from the study by using the recommendations to improve their sales performance and to build a personal willingness to stay with the company. The findings of the study will also be of immense benefit to the government, especially the Ministry of Finance and the Commissioner of Insurance, who can use them to formulate policies that will improve retention of Agents in the insurance industry. The stakeholders of Old Mutual Life Assurance Limited, which include customers, investors and the public, will also benefit from the study by understanding the factors that affect labour turnover of Agents in the company. Lastly, it will also benefit other researchers in this field, who may use this report for further studies.

1.6 Scope of the Research Study

The scope of this study lies within the Life insurance industry of Kenya. However, due to time and limited resources, the focus will be on Old Mutual Life Assurance Company Kenya. Since this study is on factors affecting labour turnover of Agents, the research will concentrate on the Agency force of the company, which has about 200 Agents nationwide. For the same reasons, the study will concentrate on the Agency force in Nairobi, which numbers about 100. The researcher will take a sample from the research population. The period of study will be up to 30th September 2012.

CHAPTER TWO: LITERATURE REVIEW

2.1 Introduction

This chapter will critically analyze literature related to the study. This will include the issue of labour turnover in general and its effects, the special attributes of Agents engaged in selling services, and Agent turnover in the Life insurance industry.

2.2 Labour Turnover

Labour turnover is the ratio of the number of employees that leave a company through attrition, dismissal or resignation during a period to the number of employees on the payroll during the same period. One of the 14 principles developed by Henri Fayol is stability of labour turnover.
He postulated that there should be stability of tenure of personnel in an organization, because high labour turnover is harmful to the organization. Employee turnover refers to the rate at which employees leave jobs in a company and are replaced by new hires. A high employee turnover rate implies that a company's employees leave their jobs at a relatively high rate. Employee turnover rates can increase for a variety of reasons, and turnover includes both employees who quit their jobs and those who are asked to leave. Average employee turnover rates differ among industries; for example, in 2006, average turnover rates in the United States varied from around 15 percent annually for durable goods manufacturing employees to as high as 56 percent for the restaurant and hospitality industry, according to Nobscot Corporation. According to freelance writer Shelley Frost, employee turnover is a natural part of business in any industry, but excessive labour turnover decreases the overall efficiency of the company and comes with a high price tag. Understanding the effects of losing a high number of employees serves as a motivator to work toward reducing the labour turnover rate, for higher profits and a more appealing work environment. The writer identified some costs associated with labour turnover as follows. Each employee who resigns costs the company money. All of the money invested in that employee through training, education and licensing walks out the door with the employee. When a replacement is hired, the company spends money on those same areas to prepare the new hire for the position. The company also pays to advertise the vacancy and may incur costs for drug testing, physicals and moving expenses. The company could pay a third of the new employee's yearly salary in such costs. Labour turnover costs the company time in addition to money.
Managers or human resources staff spend time conducting exit interviews, advertising the job, recruiting candidates and interviewing. Supervisors and colleagues are often left to cover until a new employee is hired and begins working. The new employee may take several months to fully learn the job and achieve competency in the position. When the staff changes frequently, the employees who stay have a difficult time building a positive team dynamic. A group of employees learns to work well together, only to have one or more members leave. This leaves the staff in limbo until a new employee starts. The personality and work ethic of the new employee may vary significantly from those of the previous employee. Labour turnover can hurt the overall morale of employees. The overall productivity of the workplace tends to decrease with high turnover. Since a new employee has a period of adjustment, he won't complete tasks as quickly as the person he replaces. Group projects that rely on the new team member may slow down, which affects experienced employees' productivity levels. The loss of momentum when an employee resigns may also affect morale. A high turnover rate affects the continuity of service to clients and other employees. This is particularly difficult in an industry that relies heavily on relationships with clients. For example, a client who purchases products from a company on a regular basis may grow tired of getting a new salesperson or customer service contact every few months. Consistent relationships with clients help build stronger loyalty to the company. The company is also better able to provide consistent, high-quality service with well-trained staff that doesn't change often.
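The turnover ratio defined at the start of this section, leavers during a period divided by employees on the payroll over the same period, can be expressed as a small helper. The figures in the example are illustrative only and echo the drop from 500 to about 200 agents mentioned earlier, not measured data.

```python
def turnover_rate(leavers, payroll_count):
    """Labour turnover: employees leaving in a period / employees on payroll, as a percentage."""
    if payroll_count <= 0:
        raise ValueError("payroll_count must be positive")
    return leavers / payroll_count * 100

# Illustrative example: an agency force of 500 of whom 300 leave during the period
print(turnover_rate(300, 500))  # 60.0
```

A consistent choice of base (e.g. headcount at the start of the period) matters when comparing rates across years or companies.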
2.2.3 Life Insurance Agent

According to the Independent Insurance Agents of America (IIAA) (2009), an agent is a person who performs services for another person or an organization under an express or implied agreement, and who is subject to the other's control, or right to control, over the manner and means of performing the services. The other person is called a principal. Rosenberg (2004) expresses the same opinion in different words by saying that insurance agents are sometimes referred to as insurance sales agents, whose main obligation is to help clients choose insurance policies that suit their needs. There are two types of agents as classified by LIMRA (2007): captive or tied agents, who work for one insurance company and sell only that company's products; and independent or freelance agents, who work with various insurance companies and sell the insurance products of many of them. The independent or freelance agents are usually registered and licensed companies, popularly referred to as brokers.

2.2.4 Qualification for Becoming an Insurance Agent

Frankas (2010) says that, for insurance sales agent jobs, most companies and independent agencies prefer to hire college graduates, especially those who have majored in business or economics; high school graduates are occasionally hired if they have proven sales ability or have been successful in other types of work. In fact, many entrants to insurance sales agent jobs transfer from other occupations. According to LIMRA (2007), college training may help agents grasp the technical aspects of insurance policies and the fundamentals and procedures of selling insurance. As per the recommendation of the AKI (Association of Kenya Insurers) regulations, every insurance agent must have completed the C.O.P. (Certificate of Proficiency in insurance), which is a proficiency certificate required to transact insurance business in Kenya.
Various employers are also placing greater emphasis on continuing professional education as the diversity of financial products sold by insurance agents increases (Holt, 2010). An insurance sales agent who shows ability and leadership may become a sales manager in a local office. As noted by the U.S. Bureau of Labor Statistics (2010), a few advance to agency manager. However, many agents who have built up a good clientele prefer to remain sales agents. Some, particularly in the property and casualty field, establish their own independent agencies or brokerage firms.

2.2.5 Resourcing Strategies

George (1990) has pointed out that selecting an agent has to involve a greater process than just an interview. He asserts that pre-hire assessments like testing and call-center simulations have become essential tools in the industry. Tett (2000) of Employment Technologies Corporation says that, for the insurance industry to succeed in improving agent retention, there have to be simulation centers where applicants are given the opportunity to experience what they should expect to find in the field and what sales work is like. According to Ashly (2000), it is good to control the flow of less-interested candidates before they reach the interview stage. Sometimes the applicant knows better than the hiring specialist that he or she is not right for the job. Tom (2009) and Peter (1999) agree that accepting agents without checking their interest at the initial selection stage leads to poor retention of agents. Nevertheless, Srivivas (2003) warns against relying too heavily on simulation. He says that simulation can be very effective for providing people with some exposure to what the job is likely to be. On the same note, Banks (2010) disputes the other authors by pointing out that simulations are too artificial, such that good candidates get left behind because they perform poorly in simulations. Wright (1992) asserts that simulation is only good for giving a job presentation.
2.2.6 Agents' Remuneration

According to Armstrong (2006), remuneration is the compensation an employee receives in return for his or her contribution to the organization. Luthans (1992) asserts that remuneration occupies an important place in the life of an employee, determining his or her standard of living and status in society. Groholdt (2001) points out that motivation, loyalty and productivity depend upon the remuneration an employee receives. For the employer too, employee remuneration is significant because of its contribution to the cost of production; besides, many battles (in the form of strikes and lockouts) are fought between employers and employees over issues relating to wages or bonuses. Life insurance sales professionals typically earn all or most of their income through commission, which means that they get a certain percentage of every sale they make, as well as residual income when clients continue to make payments. For this reason, an agent has the potential to earn much more than he would at an average hourly job. As with any other commission-based job, if an agent fails to perform, he will not earn anything. Even if he does sell a substantial amount of insurance one month, he may not be able to sustain those sales numbers from month to month, and this may result in an unstable level of income. Cravens, Ingram, Loforge and Youngs (1993) explored the relationships between compensation/control systems and performance and retention. Their results indicate that the type of control system, that is, management control versus commission control, is correlated with several measures of success and agent retention. They found that sales performance and agent retention were more affected by commission control than by management control.

2.2.7 Agent Training

Employee development is something that most people imagine as intrusive all-day group training sessions.
Unfortunately, this dreaded approach to employee development is just the opposite of how employee development should occur and feel to employees. Employee development can manifest itself in many forms: training, evaluations, educational programs, and even feedback. If executed correctly, training can encourage growth within both the worker and the organization itself. One of the larger aspects of developing Agents' skills and abilities is the actual organizational focus on the Agent becoming better, either as a person or as a contributor to the organization. According to Organizational Behavior by Robert Kreitner and Angelo Kiniki (2009), it has been shown that employees who receive regular, scheduled feedback, including training, along with an increase in expectations, actually have a higher level of output. Kreitner and Kiniki refer to this as the Pygmalion effect. The hope is that agents who receive training in line with their individual or organizational goals will become more efficient in what they do. Organizations should look at the positive effects of training on agent performance and consider agent development as a targeted investment in making the front-line worker stronger. More importantly, development plans that include train-the-trainer programs (training that prepares agents to become trainers of a skill) can provide exponential benefits to the organization. This training can be anything from how agents can do their own jobs better to grooming agents to replace their supervisor. In addition, agents who are invested in as trainers might be further inclined to stay with the organization, possibly reducing agent turnover. Along with supporting the organization, agents might recognize that most types of agent development provide them with benefits.
Agent development programs, ranging from certifications to education reimbursement to even basic sales skills training, have a certain cost to the organization that can easily be considered a benefit to the agent. Such awareness on the part of the agent can also lead to greater loyalty to the organization, as well as enhanced job satisfaction. Training and education that can be added to the agent's resume are big-ticket items in terms of compensation plans, and should be treated as such. Beyond agent training and certification courses, evaluations and counseling sessions are another form of agent development. They provide performance feedback and allow agents to be aware of changes to both their work goals and the overall objectives of the organization. Agents who do not receive feedback on a regular basis usually end up feeling as though they have been forgotten by their supervisor, and this pattern may even lead to feelings of dissent among the Agency force. Going back to the Pygmalion effect, agents who have consistent knowledge of their levels of performance, and who feel that their supervisors are placing expectations on them, generally perform better on an individual basis. Agents are required to attend meetings, seminars and programs to learn about new products and services, learn new selling skills and receive technical assistance in developing new accounts. Churchill, Ford, Hartley and Walker (1998) explored role variables, skill, motivation, personal factors, aptitude, and organizational/environmental factors in the retention of agents. The study found that, on average, single predictors of sales performance accounted for less than 4% of the variation in salesperson performance. Aptitude accounted for less than 2%; skill levels for slightly more than 7%; motivation for 6.6%; role perceptions were by far the best predictor, accounting for as much as 14% of the variation in performance.
Personal variables (age, height, sex, complexion and dressing) accounted for 2.6%, while organizational and environmental factors accounted for about 1%. They concluded that personal characteristics, while important, are not as important as influencing factors such as training, company policies, skill levels and motivation.

2.2.8 Physical Work Environment

The physical work environment can be identified as the place or location where somebody works. Performance experts agree that the physical work environment has a significant impact upon employee performance and productivity. By physical work environment we mean the building structures, office layout, tools, furniture, space, noise level and surrounding of

Tuesday, August 20, 2019

The Influence Of The Media Politics Essay

William Pearson. Voters may not be much influenced by the mass media but politicians certainly are. Discuss.

The influence of the media is ever-present in British politics. With the decline of consensus, and the rise in valence politics post-1970s, the influence of an overtly partisan press has become more marked, as has its relationship with political parties, which is at once symbiotic and antagonistic. The effect of the media on voters is typically examined using three key frameworks: reinforcement theory, agenda-setting theory and direct effect theory. In Britain, both voters and politicians are directly and indirectly influenced by the mass media. However, politicians have been the group most affected by the rise in media coverage, to such a great extent that politicians are no longer free to air their honest opinions. This has had a detrimental effect on political discourse in Britain, and thus upon democracy. Furthermore, the British media is largely owned by a select group of individuals (media barons), which, when combined with the media's tendency to resist regulation, renders it largely unaccountable. Despite both voters and politicians being affected, the change in the behaviour of politicians and their parties, especially in candidate selection, is the most notable difference in modern politics post-New Labour. I will first explain the theories of media influence, address their relevance to the modern British voter, and judge whether they are an accurate representation of media influence. Secondly, I will examine the effect of omnipresent media coverage upon politicians and political parties, and whether it has fundamentally and irrevocably changed politics. Thirdly, I will evaluate the influence the new media environment has had upon the British political landscape. Finally, I will note the extent to which the media has the capacity to command political action, and evaluate whether this occurs.
In order to assess media influence upon UK voters, it is necessary to understand the academic analysis behind the evaluation of media influence upon voting behaviour. Reinforcement theory suggests that the media has no great effect upon voting preference, and that the primary role of the media is to reinforce the pre-existing beliefs of the reader; it is derived in part from the observation of selective perception, wherein individuals internally filter out messages or information that conflicts with their political alignment. Furthermore, the theory suggests that the media is not responsible for dictating the national agenda; rather, it reacts and changes in line with the perceived mood of the nation. Supporters of this theory suggest that in order for a media outlet to be economically viable it must have a group of readers whose views align with the editorial line, and should this line shift, the core readership would disperse, as would revenue. Therefore it is unlikely that the political alignment of media organisations will shift, as it would theoretically damage their revenue and influence. The second theory is agenda-setting theory, which is inclusive of the reinforcement theory, as it accepts that the media cannot change the way that people think on particular issues [1]. However, it suggests that the news media is responsible for dictating the important issues of the day. For example, if the right-wing press decided to focus their efforts upon presenting law and order as the prevailing issue of the day, then the Conservatives, a party traditionally considered strong in this area, would have the electoral advantage. This is a plausible theory, as newspapers have discretion over what they publish and over the amount of coverage granted to each issue. The third theory is that of direct effects, which is considered dated by modern academics. It posits that the media can have a direct, visible and calculable effect upon voting behaviour.
It suggests that many voters can be directed towards certain conclusions by means of selective reporting. Furthermore, it proposes that the press are capable of utilising value-laden terminology [2] to shape the debate, and distort issues to the advantage of their political allies. This assumption of almost total naivete upon the part of the voter is largely held to be untrue, as there is little data to support the view that people switched parties as a result of reading a paper with a particular partisan bias [3]. While this theory has broadly fallen out of fashion, there remain demonstrable moments in which intensive media coverage of an issue has provoked such a public response that it has prompted government action, most notably the Dangerous Dogs Act 1991, which was rushed through parliament in response to press coverage of the pre-existing issue. This ill-conceived legislation was hastily enacted in response to public pressure. All these frameworks have merit, yet none are comprehensive. Due to the diversity of the British populace, all of the theories have voters to whom they correspond. Strongly aligned voters typically correlate with the conclusions of reinforcement theory, as their views are less prone to drastic changes, and they are likely to consume media which corresponds with their views. However, reinforcement theory as a basis for evaluating voting behaviour has declined in merit proportionally to the decline of strong party loyalty in British politics. In contrast, less aligned voters are more inclined to change their views due to media coverage, and the agenda setting theory and direct effects theory pertain to these floating voters, of which there are an increasingly large number post-dealignment. 
Moreover, the field of explaining media influence on voting behaviour has proven difficult to measure due to a lack of empirical evidence, and the evidence which does exist is widely disputed, in part due to the rapidly changing nature of the British electorate. One of the primary weaknesses presented by the data attempting to analyse media influence is that it has tended to focus very much on the short term [4] at the expense of long-term research. Any analysis of voting, and the media's influence upon it, is further weakened by the inherent difficulties in determining cause and effect in voting behaviour. Despite the weaknesses in the above methods, it is clear that the influence of the media upon the public, while significant, has been less pronounced than the media's direct influence upon politicians and Britain's political climate. The influence of the media upon politicians is profound in modern Britain. The main change which the rise in media influence has engendered is the increasing importance of candidates being marketable, rather than having significant political credibility. Politicians increasingly find themselves subject to, and evaluated upon, opinion polling, which is itself held to be closely associated with media coverage, with positive coverage resulting in an upturn in the opinion polls [5]. The nature of the 24-hour news cycle shapes and dictates the political world, and there is increasing pressure upon politicians to be media savvy, and to never say anything which could be misconstrued. This effect has been amplified by the rise of the internet blog and Twitter sphere, in which politicians are analysed and judged on a minute-by-minute, second-by-second basis. Politicians are no longer given the opportunity to properly articulate their thoughts and opinions, due to time-pressured and confrontational interviews. 
The primary consequence of this is that politicians are increasingly forced to rely upon sound bites in order to feature on the nightly news, and to gain publicity. Unfortunately, this has led to a situation in which politicians are averse to giving longer, more honest and articulated answers due to the potential weakness these answers pose to their media coverage and thus, public image. Another consequence of the adversarial environment cultivated by interviewers is that outspoken politicians, who are willing to be open about their views, are typically cast as eccentric and unelectable, rather than praised for their honesty. Moreover, the nature of 24-hour news, with its constant need for new headlines and talking points, has created a climate in which the executive is highly publicised at the expense of the legislature, as decisive action sells more papers than legislative discussions. Legislative discussions, reasoned debate and deep analysis of issues are often labelled indecisive or inconclusive, which stifles the proper functioning of the legislature. This further reinforces a system where the executive is almost entirely predominant over the legislature, a situation considered an aberration by most constitutional scholars. The rise of TV leadership debates has created an entirely new paradigm in British politics, with identikit leaders parroting sound bites to a disillusioned public. The 24-hour news cycle has contributed to the growth in the number of career politicians, and especially candidates with media backgrounds. This has led to the number of politicians with real-world experience declining, and the rise of the political class. The rise of TV debates and 24-hour rolling news has increasingly forced parties to ignore or disown prominent and distinguished members in response to the changing media environment. 
The most recent and notable example of this was the treatment of Sir Menzies Campbell, both internally in the Liberal Democrats, and externally by the media. Widely considered a distinguished politician, with years of loyalty and eminent service to the House of Commons and the Liberal Democrats, Menzies Campbell faced significant pressure to resign in part due to his age, and the negative effect this had upon public perception of his competence. Despite accusations of ageism from multiple parties, Campbell's position proved untenable due to the supposed electoral weakness which his age represented. His was a notable case in which the modern media were primarily focused upon irrelevant personal characteristics, rather than judging a politician upon their political views or achievements. The media has also had an effect not only upon individual politicians, but upon politics as a whole. Large media companies such as News Corp have, in recent years, acted as powerful pressure groups, who are exceedingly resistant to regulation or oversight. The Leveson inquiry is an apt example of this, as many media outlets have at times decried its recommendations for more press regulation and have spun the narrative of the inquiry's recommendations being contrary to the freedom of the press, even in light of the phone hacking scandal. One of the most damaging results of the 24-hour news cycle, and the constant evaluation of governmental performance, is that it has encouraged short-termism in government spheres. A policy which doesn't deliver immediate results, but which would be better in the long term, is unlikely to be approved, as without immediate results a policy could be spun as a failure by the opposition or the press. 
This move towards short-termism is another way in which legislative discussion, analysis and planning is stifled in favour of bold, decisive decision making, as this portrays the government in a more favourable light, potentially at the expense of the national interest. In summary, I would suggest that the media has fundamentally altered the nature of British politics. It has changed candidate selection, the political and social make-up of the House of Commons, and governmental behaviour, and with the growth of the internet, blogging and social media, this trend seems unlikely to be averted. While the effect which the media can have upon politicians is profound, the media can also have a significant impact upon legislation, and while it is rare, a media outcry can affect policy. The most notable case in which this has happened is the Dangerous Dogs Act 1991. It was enacted in response to sensationalist newspaper reports during 1990-91 which painted the problem of dogs attacking small children as a new and terrifying phenomenon. The resulting media furore led to the government pushing ill-conceived legislation through the House. The absurdity of the act in its initial form was highlighted when a dog named Woofie was almost put down for barking at a postman. The act has since been modified on multiple occasions, and is typically held to be a classic example of the media's potential power over government, and the potential problems which can ensue. In conclusion, media influence on voter behaviour is highly variable, and all three theories have merits and weaknesses, with reinforcement theory and agenda setting theory being the most relevant to modern Britain. While the empirical data is limited and inconclusive, it is certain that the media has less direct influence upon voters than it does upon politicians. 
The changing nature of the British media has led to politicians being so constricted in their media appearances that it has negatively affected British politics, and those politicians who dare to express themselves are castigated and marginalised. The prominence of 24-hour news, and the rise of TV debates, has led to the rise of a new political class primarily comprised of career politicians, or those who have transitioned directly into politics from media-linked jobs, selected for their ability to manipulate the media rather than their political beliefs, their character or significant contributions to their party or the nation. The rise of social media has further contributed to the Age of Contempt and the short-termism which it has engendered. While the media has an effect upon voters, it has been far less pronounced than upon politicians. The rise of this new media climate has had a broadly negative effect upon political life. This is exacerbated by the unaccountability of media barons, and their ability to act as self-interested pressure groups to resist regulation. While the age of contempt is preferable to a time of excessive deference, the political culture it has created may be just as damaging in the long term.

Monday, August 19, 2019

Violence in Dracula :: essays research papers

Throughout many types of literature, violence exists to enhance the reader's interest in order to add a sense of excitement or conflict to a novel. This statement holds much truth, because without violence in a piece of literature such as Dracula by Bram Stoker, the plot would not have the same impact. The same holds true for the movie. The movie bears different characteristics than the book. First off, the whole ordeal with the wolf escaping and jumping into Lucy's room, and Lucy's mom having a heart attack, is never even mentioned in the movie. Second, the night when the four men go to Lucy's grave and find it empty appears both in the book and in the movie; however, what unfolds after this is different. Finally, the end of the book differs severely from Francis Ford Coppola's rendition. The differences are as follows… A newspaper clipping from September 18 reports that a large wolf escaped from its cage for a night and returned the next morning. On the night of the 17th, Lucy records how she awakes, frightened by a flapping at the window and a howling outside. Her mother comes in, frightened by the noise, and joins her in bed. Suddenly, the window is broken, and a huge wolf leaps in. Terrified, Lucy's mother tears the garlic wreath from her daughter's neck and then suffers a heart attack and dies. Lucy loses consciousness, and when she regains it, the wolf is gone. The four household maids come in and are terrified by the sight of the body; they go to have a glass of wine, but the liquid is drugged and they pass out. Lucy is left alone, and she hides her diary, writing at the end that the "air seems full of specks, floating and circling . . . and the lights burn blue and dim" (Stoker 117). This part in the book keeps the reader on the edge of his seat to read what will occur next. 
It is baffling to me why Coppola decided not to include it in the movie. I think that this primarily had to do with the fact that in the movie Dracula was perceived to be a loving person of sorts and not a monster, as he is thought of in the book.

Sunday, August 18, 2019

Dance In The Early Twentieth Century Essay -- history of jazz

The history of Jazz music is one that is tied to enslavement and prejudice, and it is impossible to separate the development of Jazz music from the racial oppression that occurred in the United States, as they are inextricably connected. Slavery was a part of our country's development that is shameful and yet led to some of the greatest musical advances of the twentieth century. Slavery in the United States first began in 1619 when Dutch traders seized a Spanish slave ship and brought those aboard to the North American colony of Jamestown, Virginia. When the North American continent was first colonized by Europeans, the vast land proved to be more work than they had anticipated and there was a severe shortage of labor. Landowners needed a solution for cheap and plentiful labor to help with the production of profitable crops such as tobacco and rice. Although many landowners already made use of indentured servants, poor youth from Britain and Germany who sought passage to America and would be contracted to work a given number of years before they were granted freedom, they soon realized that in order to continue expansion they would need to employ more labor. This meant bringing more people over from Africa against their own will, almost depleting the African continent of its healthiest and most capable men and women (Slavery in America, 2009). Individuals with African origins were not English by birth; instead they were considered foreigners, outside English Common Law, and were not granted equal rights. Many slave owners intended to make their slaves completely dependent on them and prohibited them from learning to read or write. The oppression of black slaves was on the rise and many sources estimate that nearly twelv... ...ca | (2006, August). Scholastic.com. Retrieved April 20, 2014, from http://teacher.scholastic.com/activities/bhistory/history_of_jazz.htm 6) Peretti, B. W. (1992). White Jazz Musicians of the 1920's. 
The creation of jazz: music, race, and culture in urban America. Urbana: University of Illinois Press. 7) Scaruffi, P. (2005, January 1). A History of Jazz Music. A History of Jazz Music. Retrieved April 26, 2014, from http://www.scaruffi.com/history/jazz1.html 8) Slavery in America. (2009, January 1). History.com. Retrieved April 17, 2014, from http://www.history.com/topics/black-history/slavery 9) Stearns, M. W., & Stearns, J. (1968). Jazz dance; the story of American vernacular dance. New York: Macmillan. 10) White, S., & White, G. J. (2005). The sounds of slavery: discovering African American history through songs, sermons, and speech. Boston: Beacon Press.

Competition Act :: Essays Papers

Competition Act The Competition Act at large focuses on forbidding agreements between undertakings, or concerted practices, which may restrict competition within the market. It forbids all practices which amount to the abuse of a dominant position in the market by an undertaking, where the practice could potentially affect trade between its members. The rules of the Act set out the basic framework, providing for the maintenance of effective competition in the market. The Competition Act, based on Articles 85 and 86 of the Treaty of Rome, provides control over business practices within our market. "The following shall be prohibited as incompatible with the common market: all agreements between undertakings, decisions by associations of undertakings, and concerted practices which may affect trade between member states and which have as their object or effect the prevention, restriction or distortion of competition within the common market." Therefore, any agreement, decision or practice caught by Section 5(1) must meet the following conditions:
1. There must be some form of collusion between the undertakings.
2. Trade must be affected.
3. There must be some adverse effect on competition.
This Section covers such agreements, decisions and practices which:
a. Directly or indirectly fix the purchase or selling price or other trading conditions.
b. Limit or control production, markets, technical development or investment.
c. Share markets or sources of supply.
d. Impose the application of dissimilar conditions to equivalent transactions with other parties outside such agreement, thereby placing them at a competitive disadvantage.
e. Make the conclusion of contracts subject to the acceptance by the other parties of supplementary obligations which, by their nature or according to commercial usage, have no connection with the subject of such contracts.
The Competition Act analyzes various aspects so as to promote a healthy business environment. 
It gives a clear picture with respect to positioning in the market. Clearly, the narrower the definition of the relevant market, the greater the importance of an undertaking's share of that market. Once one has defined the relevant market, one must determine whether the questioned undertaking has a dominant position in that market. In general, an undertaking has a dominant position if it can act on the market independently from its competitors. Thus, if a seller can ask any price for a product, even though its competitors are selling a similar product for much less, it is likely that the seller in question has a dominant position.

Saturday, August 17, 2019

Chameleon Chips

INTRODUCTION Today's microprocessors sport a general-purpose design, which has its own advantages and disadvantages.
Advantage: One chip can run a range of programs. That's why you don't need separate computers for different jobs, such as crunching spreadsheets or editing digital photos.
Disadvantage: For any one application, much of the chip's circuitry isn't needed, and the presence of those "wasted" circuits slows things down.
Suppose, instead, that the chip's circuits could be tailored specifically for the problem at hand, say, computer-aided design, and then rewired, on the fly, when you loaded a tax-preparation program. One set of chips, little bigger than a credit card, could do almost anything, even changing into a wireless phone. The market for such versatile marvels would be huge, and would translate into lower costs for users. So computer scientists are hatching a novel concept that could increase number-crunching power and trim costs as well. Call it the chameleon chip. Chameleon chips would be an extension of what can already be done with field-programmable gate arrays (FPGAs). An FPGA is covered with a grid of wires. At each crossover, there's a switch that can be semipermanently opened or closed by sending it a special signal. Usually the chip must first be inserted in a little box that sends the programming signals. But now, labs in Europe, Japan, and the U.S. are developing techniques to rewire FPGA-like chips anytime, and even software that can map out circuitry that's optimized for specific problems. The chips still won't change colors. But they may well color the way we use computers in years to come. The chameleon chip is a fusion between custom integrated circuits and programmable logic. For highly performance-oriented tasks, custom chips that do one or two things spectacularly, rather than a lot of things averagely, are used. Now, using field-programmed chips, we have chips that can be rewired in an instant. 
Thus the benefits of customization can be brought to the mass market. A reconfigurable processor is a microprocessor with erasable hardware that can rewire itself dynamically. This allows the chip to adapt effectively to the programming tasks demanded by the particular software it is interfacing with at any given time. Ideally, the reconfigurable processor can transform itself from a video chip to a central processing unit (CPU) to a graphics chip, for example, all optimized to allow applications to run at the highest possible speed. The new chips can be called a "chip on demand." In practical terms, this ability can translate to immense flexibility in terms of device functions. For example, a single device could serve as both a camera and a tape recorder (among numerous other possibilities): you would simply download the desired software and the processor would reconfigure itself to optimize performance for that function. Reconfigurable processors compete in the market with traditional hard-wired chips and several types of programmable microprocessors. Programmable chips have been in existence for over ten years. Digital signal processors (DSPs), for example, are high-performance programmable chips used in cell phones, automobiles, and various types of music players. Another variant, programmable logic chips, are equipped with arrays of memory cells that can be programmed to perform hardware functions using software tools. These are more flexible than the specialized DSP chips but also slower and more expensive. Hard-wired chips are the oldest, cheapest, and fastest, but also the least flexible, of all the options. Chameleon chips Highly flexible processors that can be reconfigured remotely in the field, Chameleon's chips are designed to simplify communication system design while delivering increased price/performance numbers. The chameleon chip is a high-bandwidth reconfigurable communications processor (RCP). 
It aims at changing a system's design from a remote location. This will mean more versatile handhelds. Processors operate at 24,000 16-bit million operations per second (MOPS) and 3,000 16-bit million multiply-accumulates per second (MMACS), and provide 50 channels of CDMA2000 chip-rate processing. The 0.25-micron CS2112 chip is an example. These new chips are able to rewire themselves on the fly to create the exact hardware needed to run a piece of software at the utmost speed. An example of such a chip is the chameleon chip; this can also be called a "chip on demand." "Reconfigurable computing goes a step beyond programmable chips in the matter of flexibility. It is not only possible but relatively commonplace to 'rewrite' the silicon so that it can perform new functions in a split second. Reconfigurable chips are simply the extreme end of programmability." The overall performance of the ACM can surpass the DSP because the ACM only constructs the actual hardware needed to execute the software, whereas DSPs and microprocessors force the software to fit their given architecture. One reason that this type of versatility is not possible today is that handheld gadgets are typically built around highly optimized specialty chips that do one thing really well. These chips are fast and relatively cheap, but their circuits are literally written in stone, or at least in silicon. A multipurpose gadget would have to have many specialized chips, a costly and clumsy solution. Alternately, you could use a general-purpose microprocessor, like the one in your PC, but that would be slow as well as expensive. For these reasons, chip designers are turning increasingly to reconfigurable hardware: integrated circuits where the architecture of the internal logic elements can be arranged and rearranged on the fly to fit particular applications. 
Designers of multimedia systems face three significant challenges in today's ultra-competitive marketplace: our products must do more, cost less, and be brought to market quicker than ever. Though each of these goals is individually attainable, the hat trick is generally unachievable with traditional design and implementation techniques. Fortunately, some new techniques are emerging from the study of reconfigurable computing that make it possible to design systems that satisfy all three requirements simultaneously. Although originally proposed in the late 1960s by a researcher at UCLA, reconfigurable computing is a relatively new field of study. The decades-long delay had mostly to do with a lack of acceptable reconfigurable hardware. Reprogrammable logic chips like field-programmable gate arrays (FPGAs) have been around for many years, but these chips have only recently reached gate densities making them suitable for high-end applications. (The densest of the current FPGAs have approximately 100,000 reprogrammable logic gates.) With an anticipated doubling of gate densities every 18 months, the situation will only become more favorable from this point forward. The primary product is ground-station equipment for satellite communications. This application involves high-rate communications, signal processing, and a variety of network protocols and data formats. ADVANTAGES AND APPLICATIONS Its applications include:
- data-intensive Internet
- DSP
- wireless basestations
- voice compression
- software-defined radio
- high-performance embedded telecom and datacom applications
- xDSL concentrators
- fixed wireless local loop
- multichannel voice compression
- multiprotocol packet and cell processing protocols
Its advantages are:
- can create customized communications signal processors
- increased performance and channel count
- can more quickly adapt to new requirements and standards
- lower development costs and reduced risk
FPGA One of the most promising approaches in the realm of reconfigurable architecture is a technology called "field-programmable gate arrays." The strategy is to build uniform arrays of thousands of logic elements, each of which can take on the personality of different, fundamental components of digital circuitry; the switches and wires can be reprogrammed to operate in any desired pattern, effectively rewiring a chip's circuitry on demand. A designer can download a new wiring pattern and store it in the chip's memory, where it can be easily accessed when needed. Not so hard after all Reconfigurable hardware first became practical with the introduction a few years ago of a device called a "field-programmable gate array" (FPGA) by Xilinx, an electronics company that is now based in San Jose, California. An FPGA is a chip consisting of a large number of "logic cells". These cells, in turn, are sets of transistors wired together to perform simple logical operations. Evolving FPGAs FPGAs are arrays of logic blocks that are strung together through software commands to implement higher-order logic functions. Logic blocks are similar to switches with multiple inputs and a single output, and are used in digital circuits to perform binary operations. Unlike with other integrated circuits, developers can alter both the logic functions performed within the blocks and the connections between the blocks of FPGAs by sending signals that have been programmed in software to the chip. FPGA blocks can perform the same high-speed hardware functions as fixed-function ASICs, and, to distinguish them from ASICs, they can be rewired and reprogrammed at any time from a remote location through software. Although it took several seconds or more to change connections in the earliest FPGAs, FPGAs today can be configured in milliseconds. 
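The behaviour of a single logic cell can be pictured as a small lookup table (LUT): the configuration bits are simply the truth table of the function the cell implements. The following Python sketch is purely an illustration of the concept, not any vendor's tooling; the `Lut2` class name is invented for the example.

```python
# A toy model of one FPGA logic cell: a 2-input lookup table (LUT).
# The four configuration bits are the cell's truth table; "reprogramming"
# the cell means nothing more than loading a different set of bits.

class Lut2:
    def __init__(self, config_bits):
        # config_bits[i] is the output for inputs (a, b), where i = a*2 + b
        assert len(config_bits) == 4
        self.config = list(config_bits)

    def eval(self, a, b):
        return self.config[a * 2 + b]

and_cell = Lut2([0, 0, 0, 1])   # truth table of AND
xor_cell = Lut2([0, 1, 1, 0])   # truth table of XOR

print(and_cell.eval(1, 1))  # 1
print(xor_cell.eval(1, 0))  # 1
```

Real FPGA cells typically use 4- to 6-input LUTs plus a flip-flop, but the principle is the same: the "hardware function" is just data in a small memory.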
Field-programmable gate arrays have historically been applied as what is called glue logic in embedded systems, connecting devices with dissimilar bus architectures. They have often been used to link digital signal processors, CPUs used for digital signal processing, to general-purpose CPUs. The growth in FPGA technology has lifted the arrays beyond the simple role of providing glue logic. With their current capabilities, they clearly now can be classed as system-level components, just like CPUs and DSPs. The largest of the FPGA devices made by the company with which one of the authors of this article is affiliated, for example, has more than 150 million transistors, seven times more than a Pentium-class microprocessor. Given today's time-to-market pressures, it is increasingly critical that all system-level components be easy to integrate, especially since the phase involving the integration of multiple technologies has become the most time-consuming part of a product's development cycle. Integrating Hardware and Software Systems designers producing mixed CPU and FPGA designs can take advantage of deterministic real-time operating systems (RTOSs). Deterministic software is suited for controlling hardware. As such, it can be used to efficiently manage the content of system data and the flow of such data from a CPU to an FPGA. FPGA developers can work with RTOS suppliers to facilitate the design and deployment of systems using combinations of the two technologies. FPGAs operating in conjunction with embedded design tools provide an ideal platform for developing high-performance reconfigurable computing solutions for medical instrument applications. The platform supports the design, development, and testing of embedded systems based on the C language. Integration of FPGA technology into systems using a deterministic RTOS can be streamlined by means of an enhanced application programming interface (API). 
The blending of hardware, firmware, application software, and an RTOS into a platform-based approach removes many of the development barriers that still limit the functionality of embedded applications. Development, profiling, and analysis tools are available that can be used to analyze computational hot spots in code and to perform low-level timing analysis in multitasking environments. One way developers can use these analytical tools is to determine whether to design a function in hardware or software. Profiling enables them to quickly identify functionality that is frequently used or computationally intensive. Such functions may be prime candidates for moving from software to FPGA hardware. An integrated suite of run-time analysis tools with a run-time error checker and visual interactive profiler can help developers create higher-quality, higher-performance code in little time. An FPGA consists of an array of configurable logic blocks that implement the logical functions. In FPGAs, sending signals to the chip can alter both the logic functions performed within the logic blocks and the connections between the blocks. These blocks are similar in structure to the gate arrays used in some ASICs, but whereas standard gate arrays are configured and fixed during manufacture, the configurable logic blocks in new FPGAs can be rewired and reprogrammed repeatedly in around a microsecond. Among the advantages of FPGAs are a short time to market, flexibility and upgradability, and low manufacturing cost. An FPGA can be configured using the VHSIC Hardware Description Language (VHDL), Handel-C or Java. FPGAs are used presently in encryption, image processing and mobile communications, and can be used in 4G mobile communication. Field-programmable gate arrays offer companies the possibility of developing a chip very quickly, since a chip can be configured by software. 
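As a concrete illustration of that hardware/software partitioning workflow, a standard software profiler can flag the hot spots that are candidates for moving into FPGA hardware. The sketch below uses Python's built-in cProfile; the function names (`filter_block`, `control_logic`) are invented for the example and do not refer to any real toolchain.

```python
import cProfile
import io
import pstats

def filter_block(samples):
    # Hypothetical DSP-style hot loop: a simple 2-tap averaging filter.
    return [0.5 * x + 0.5 * y for x, y in zip(samples, samples[1:])]

def control_logic(n):
    # Lightweight housekeeping code, unlikely to benefit from hardware.
    return sum(range(n))

def run():
    data = list(range(200_000))
    for _ in range(10):
        filter_block(data)
    control_logic(1000)

profiler = cProfile.Profile()
profiler.enable()
run()
profiler.disable()

# Print the five most expensive entries by cumulative time; the filter
# dominates, marking it as a candidate for offloading to the FPGA.
stats = io.StringIO()
pstats.Stats(profiler, stream=stats).sort_stats("cumulative").print_stats(5)
print(stats.getvalue())
```

The same idea applies with any profiler: functions that are both frequent and compute-bound are the ones worth rebuilding as configurable logic.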
A chip can also be reconfigured, either at execution time or as part of an upgrade to allow new applications, simply by loading a new configuration into the chip. The advantages can be seen in terms of cost, speed, and power consumption. The added functionality of multi-parallelism allows one FPGA to replace multiple ASICs. FPGAs find application in image processing, encryption, mobile communication, memory management, digital signal processing, telephone units, and mobile base stations. Although it is very hard to predict the direction this technology will take, it seems more than likely that future silicon chips will be a combination of programmable logic, memory blocks, and specific function blocks such as floating-point units. It is hard to predict at this early stage, but it looks likely that the technology will have to change over the coming years, and the rate of change for major players in today's marketplace, such as Intel, Microsoft, and AMD, will be crucial to their survival. The precise behaviour of each cell is determined by loading a string of numbers into a memory underneath it. The way in which the cells are interconnected is specified by loading another set of numbers into the chip. Change the first set of numbers and you change what the cells do. Change the second set and you change the way they are linked up. Since even the most complex chip is, at its heart, nothing more than a bunch of interlinked logic circuits, an FPGA can be programmed to do almost anything that a conventional fixed piece of logic circuitry can do, just by loading the right numbers into its memory. And by loading in a different set of numbers, it can be reconfigured in the twinkling of an eye. Basic reconfigurable circuits already play a huge role in telecommunications.
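The "string of numbers loaded into a memory underneath the cell" can be modeled directly: each cell is a small lookup table (LUT), and its loaded bits are nothing more than a truth table. The following C sketch models a 4-input LUT; it is a conceptual illustration of how loading different numbers changes the cell's function, not any vendor's bitstream format.

```c
#include <stdint.h>

/* A 4-input lookup table (LUT), the core of a configurable logic
   block. The 16-bit `truth` value is the "string of numbers" loaded
   into the memory under the cell; changing it rewires the function. */
int lut4_eval(uint16_t truth, int a, int b, int c, int d)
{
    /* The four input bits form an index that selects one bit of the
       truth table, exactly as the hardware multiplexer tree would. */
    unsigned idx = (unsigned)((a & 1) | ((b & 1) << 1) |
                              ((c & 1) << 2) | ((d & 1) << 3));
    return (truth >> idx) & 1;
}

/* Example configurations:
   0x8000 -> 4-input AND (only index 15 is set);
   0x6666 -> XOR of the first two inputs (c and d ignored). */
```

Loading 0x8000 makes the same cell behave as an AND gate; reloading it with 0x6666 turns it into an XOR, which is the whole reconfiguration story in miniature.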
For instance, relatively simple versions made by companies such as Xilinx and Altera are widely used for network routers and switches, enabling circuit designs to be easily updated electronically without replacing chips. In these early applications, however, the speed at which the chips reconfigure themselves is not critical. To be quick enough for personal information devices, the chips will need to completely reconfigure themselves in a millisecond or less. "That kind of chameleon device would be the killer app of reconfigurable computing." These experts predict that in the next couple of years reconfigurable systems will be used in cell phones to handle things like changes in telecommunications systems or standards as users travel between calling regions, or between countries. It is getting more expensive and difficult to pattern, or etch, the elaborate circuitry used in microprocessors, and many experts have predicted that maintaining the current rate of putting more circuits into ever smaller spaces will, sometime in the next 10 to 15 years, result in features on microchips no bigger than a few atoms, which would demand a nearly impossible level of precision in fabricating circuitry. Reconfigurable chips, however, don't need that type of precision, so they could let us make computers that function at the nanoscale. The CS2112, a reconfigurable communications processor (RCP) developed by Chameleon Systems, has an architecture designed to be as flexible as an FPGA and as easy to program as a digital signal processor (DSP), with real-time, visual debugging capability. The development environment, comprising Chameleon's C-SIDE software tool suite and CT2112SDM development kit, enables customers to develop and debug communication and signal processing systems running on the RCP. The RCP's development environment helps overcome a fundamental design and debug challenge facing communication system designers.
In order to build sufficient performance, channel capacity, and flexibility into their systems, today's designers have been forced to employ an amalgamation of DSPs, FPGAs, and ASICs, each of which requires a unique design and debug environment. The RCP platform was designed from the ground up to alleviate this problem: first, by significantly exceeding the performance and channel capacity of the fastest DSPs; second, by integrating a complete SoC subsystem, including an embedded microprocessor, PCI core, DMA function, and high-speed bus; and third, by consolidating the design and debug environment into a single platform-based design system that affords the designer comprehensive visibility and control. The C-SIDE software suite includes tools used to compile C and assembly code for execution on the CS2112's embedded microprocessor, and Verilog simulation and synthesis tools used to create parallel datapath kernels that run on the CS2112's reconfigurable processing fabric. In addition to code generation tools, the package contains source-level debugging tools that support simulation and real-time debugging. Chameleon's design approach leverages the methods employed by most of today's communications system designers. The designer starts with a C program that models the signal processing functions of the baseband system. Having identified the dataflow-intensive functional blocks, the designer implements them in the RCP to accelerate them by 10- to 100-fold. The designer creates equivalent functions for those blocks, called kernels, in Chameleon's reconfigurable, assembly-language-like design entry language. The assembler then automatically generates standard Verilog for these kernels, which the designer can verify with commercial Verilog simulators. Using these tools, the designer can compare testbench results for the original C functions with similar results for the Verilog kernels.
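A typical starting point for the flow above is a C model of a dataflow-intensive baseband block. The FIR filter below is a generic illustration of such a kernel candidate, not Chameleon's own reference code; its inner multiply-accumulate loop is precisely the kind of structure that maps onto the fabric's datapath units and multipliers.

```c
#include <stdint.h>
#include <stddef.h>

/* One output sample of an FIR filter: y[i] = sum over k of
   x[i+k] * coef[k]. In the flow described above, the designer would
   profile this in the C model, then re-express the inner
   multiply-accumulate loop as a kernel for the reconfigurable
   processing fabric, where all taps can run in parallel. */
int32_t fir_at(const int16_t *x, const int16_t *coef,
               size_t taps, size_t i)
{
    int32_t acc = 0;
    for (size_t k = 0; k < taps; k++)
        acc += (int32_t)x[i + k] * coef[k];  /* multiply-accumulate */
    return acc;
}
```

The C version doubles as the testbench reference: the Verilog kernel generated later must reproduce these outputs sample for sample.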
In the next phase, the designer synthesises the Verilog kernels using Chameleon's synthesis tools targeting Chameleon technology. At the end, the tools output a bit file that is used to configure the RCP. The designer then integrates the application-level C code with the Verilog kernels and the rest of the standard C functions. Chameleon's C-SIDE compiler and linker technology makes this integration step transparent to the designer. The CS2112 development environment makes all chip registers and memory locations accessible through a development console that enables full processor-like debugging, including features like single-stepping and setting breakpoints. Before actually productising the system, the designer must often perform a system-level simulation of the data flow within the context of the overall system. Chameleon's development board enables the designer to connect multiple RCPs to other devices in the system using the PCI bus and/or programmable I/O pins. This helps prove the design concept, and enables the designer to profile the performance of the whole basestation system in a real-world environment. With telecommunications OEMs facing shrinking product life cycles and increasing market pressures, not to mention the constant flux of protocols and standards, it's more necessary than ever to have a platform that's reconfigurable. This is where the Chameleon chips are going to make their effect felt. The Chameleon CS2112 is a high-bandwidth, reconfigurable communications processor aimed at:

• second- and third-generation (2G/3G) wireless base stations
• fixed-point wireless local loop (WLL)
• voice over IP
• DSL (digital subscriber line)
• high-end DSP operations
• software-defined radio
• security processing

"Traditional solutions such as FPGAs and DSPs lack the performance for high-bandwidth applications, and fixed-function solutions like ASICs incur unacceptable limits." Each product in the CS2000 family has the same fundamental functional blocks: a 32-bit RISC processor, a full-featured memory controller, a PCI controller, and a reconfigurable processing fabric (RPF), all of which are interconnected by a high-speed system bus. The fabric comprises an array of reconfigurable tiles used to implement the desired algorithms. Each tile contains seven 32-bit reconfigurable datapath units, four blocks of local store memory, two 16×24-bit multipliers, and a control logic unit.

BASIC ARCHITECTURE

Components:

• 32-bit RISC ARC processor at 125 MHz
• 64-bit memory controller
• 32-bit PCI controller
• reconfigurable processing fabric (RPF)
• high-speed system bus
• programmable I/O (160 pins)
• DMA subsystem
• configuration subsystem

The RPF itself consists of four slices with three tiles in each, and each tile can be reconfigured at runtime. Tiles contain:

• datapath units
• local store memories
• 16×24 multipliers
• a control logic unit

The C-SIDE design system is a fully integrated tool suite, with a C compiler, Verilog synthesizer, and full-chip simulator, as well as a debug and verification environment, an element not readily found in ASIC and FPGA design flows, according to Chameleon. Still, reconfigurable chips represent an attempt to combine the best features of hard-wired custom chips, which are fast and cheap, and programmable logic device (PLD) chips, which are flexible and easily brought to market. Unlike PLDs, QuickSilver's reconfigurable chips can be reprogrammed every few nanoseconds, rewiring circuits so they are processing global positioning satellite signals one moment and CDMA cellular signals the next. Think of the chips as consisting of libraries with preset hardware designs and chalkboards.
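The RPF geometry listed above implies fixed totals of compute resources, which can be sanity-checked with a few lines of C. The counts come from the text; the code itself is only an illustrative tally, not a model of the silicon.

```c
#include <stddef.h>

/* RPF geometry from the text: 4 slices x 3 tiles, each tile holding
   seven 32-bit datapath units, four local store memories, and two
   16x24-bit multipliers. */
enum {
    RPF_SLICES      = 4,
    TILES_PER_SLICE = 3,
    DPUS_PER_TILE   = 7,
    LSMS_PER_TILE   = 4,
    MULS_PER_TILE   = 2
};

size_t rpf_tiles(void)       { return RPF_SLICES * TILES_PER_SLICE; }
size_t rpf_dpus(void)        { return rpf_tiles() * DPUS_PER_TILE; }
size_t rpf_local_stores(void){ return rpf_tiles() * LSMS_PER_TILE; }
size_t rpf_multipliers(void) { return rpf_tiles() * MULS_PER_TILE; }
```

So a full CS2112 fabric offers 12 independently reconfigurable tiles: 84 datapath units, 48 local store blocks, and 24 multipliers in total.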
Upon receiving instructions from software, the chip takes a hardware component from the library (which is stored as software in memory) and puts it on the chalkboard (the chip). The chip wires itself instantly to run the software and dispatches it. The hardware can then be erased for the next cycle. With this style of computing, QuickSilver's chips can operate 80 times as fast as a custom chip but still consume less power and board space, which translates into lower costs. The company believes that "soft silicon," or chips that can be reconfigured on the fly, can be the heart of multifunction camcorders or digital television sets. With programmable logic devices, designers use inexpensive software tools to quickly develop, simulate, and test their designs. Then a design can be quickly programmed into a device and immediately tested in a live circuit. The PLD that is used for this prototyping is the exact same PLD that will be used in the final production of a piece of end equipment, such as a network router, a DSL modem, a DVD player, or an automotive navigation system. The two major types of programmable logic devices are field-programmable gate arrays (FPGAs) and complex programmable logic devices (CPLDs). Of the two, FPGAs offer the highest amount of logic density, the most features, and the highest performance. FPGAs are used in a wide variety of applications ranging from data processing and storage to instrumentation, telecommunications, and digital signal processing. To overcome the limitations of these conventional devices and offer a flexible, cost-effective solution, many new entrants to the DSP market are extolling the virtues of configurable and reconfigurable DSP designs. This latest breed of DSP architectures promises greater flexibility to quickly adapt to numerous and fast-changing standards. Plus, they claim to achieve higher performance without adding silicon area, cost, design time, or power consumption.
In essence, because the architecture isn't rigid, the reconfigurable DSP lets the developer tailor the hardware for a specific task, achieving the right size and cost for the target application. Moreover, the same platform can be reused for other applications. Because development tools are a critical part of this solution (in fact, they're true enablers), the newcomers also ensure that the tools are robust and tightly linked to the devices' flexible architectures. While providing an intuitive, integrated development environment for designers, the manufacturers ensure affordability as well.

RECONFIGURING THE ARCHITECTURE

Some of the new configurable DSP architectures are reconfigurable too; that is, developers can modify their landscape on the fly, depending on the incoming data stream. This capability permits dynamic reconfigurability of the architecture as demanded by the application. Proponents of such chips are proclaiming an era of "chip-on-demand," wherein new algorithms can be accommodated on-chip in real time via software. This eliminates the cumbersome job of fitting the latest algorithms and protocols into existing rigid hardware. A reconfigurable communications processor (RCP) can be reconfigured for different processing algorithms in one clock cycle. Chameleon designers are revising the architecture to create a chip that can address a much broader range of applications. Plus, the supplier is preparing a new, more user-friendly suite of tools for traditional DSP designers. Thus, the company is dropping the term reconfigurability for the new architecture and going with a more traditional name, the streaming data processor (SDP). Though the SDP will include a reconfigurable processing fabric, it will be substantially altered, the company says. Unlike the older RCP, the new chip won't have the ARC RISC core, and it will support a much higher clock rate. Additionally, it will be implemented in a 0.13-µm CMOS process to meet the signal processing needs of a much broader market. Further details await the release of the SDP, sometime in the first quarter of 2003. While Chameleon is in redesign mode, QuickSilver Technologies is in test mode. This reconfigurable proponent, which prefers to call its architecture an adaptive computing machine, or ACM, has realized its first silicon test chip. In fact, the tests indicate that it outperforms a hardwired, fixed-function ASIC in processing compute-intensive cdma2000 algorithms, like system acquisition, rake finger, and set maintenance. For example, the ASIC's nominal speed for searching 215 phase offsets in a basic multipath search algorithm is about 3 seconds; the ACM test chip took just one second at a 25-MHz clock speed to perform the same number of searches in a cdma2000 handset. Likewise, the device accomplishes over 57,000 adaptations per second in rake-finger operation to cycle through all operations in this application every 52 µs (Fig. 1). In the set-maintenance application, the chip is almost three times faster than an ASIC, claims QuickSilver.

The power of a computer stems from the fact that its behaviour can be changed with little more than a dose of new software. A desktop PC might, for example, be browsing the Internet one minute, and running a spreadsheet or entering the virtual world of a computer game the next. But the ability of a microprocessor (the chip that is at the heart of any PC) to handle such a variety of tasks is both a strength and a weakness, because hardware dedicated to a particular job can do things so much faster. Recognising this, the designers of modern PCs often hand over such tasks as processing 3-D graphics, decoding and playing movies, and processing sound (things that could, in theory, be done by the basic microprocessor) to specialist chips.
These chips are designed to do their particular jobs extremely fast, but they are inflexible in comparison with a microprocessor, which does its best to be a jack-of-all-trades. So the hardware approach is faster, but using software is more flexible. At the moment, such reconfigurable chips are used mainly as a way of conjuring up specialist hardware in a hurry. Rather than designing and building an entirely new chip to carry out a particular function, a circuit designer can use an FPGA instead. This speeds up the design process enormously, because making changes becomes as simple as downloading a new configuration into the chip. Chameleon Systems also develops reconfigurable chips for the high-end telecom-switching market.

RECONFIGURABLE PROCESSORS

A reconfigurable processor is a microprocessor with erasable hardware that can rewire itself dynamically. This allows the chip to adapt effectively to the programming tasks demanded by the particular software it is interfacing with at any given time. Ideally, the reconfigurable processor can transform itself from a video chip to a central processing unit (CPU) to a graphics chip, for example, each optimized to allow applications to run at the highest possible speed. The new chips can be called "chips on demand." In practical terms, this ability can translate to immense flexibility in terms of device functions. For example, a single device could serve as both a camera and a tape recorder (among numerous other possibilities): you would simply download the desired software and the processor would reconfigure itself to optimize performance for that function. Reconfigurable processors compete in the market with traditional hard-wired chips and several types of programmable microprocessors. Programmable chips have been in existence for over ten years. Digital signal processors (DSPs), for example, are high-performance programmable chips used in cell phones, automobiles, and various types of music players.
While microprocessors have been the dominant devices in use for general-purpose computing for the last decade, there is still a large gap between the computational efficiency of microprocessors and custom silicon. Reconfigurable devices, such as FPGAs, have come closer to closing that gap, offering a 10x benefit in computational density over microprocessors, and often offering another potential 10x improvement in yielded functional density on low-granularity operations. On highly regular computations, reconfigurable architectures have a clear superiority to traditional processor architectures. On tasks with high functional diversity, microprocessors use silicon more efficiently than reconfigurable devices. The BRASS project is developing a coupled architecture that allows a reconfigurable array and a processor core to cooperate efficiently on computational tasks, exploiting the strengths of both architectures. We are developing an architecture and a prototype component that will combine a processor and a high-performance reconfigurable array on a single chip. The reconfigurable array extends the usefulness and efficiency of the processor by providing the means to tailor its circuits for special tasks. The processor improves the efficiency of the reconfigurable array for irregular, general-purpose computation. We anticipate that a processor combined with reconfigurable resources can achieve a significant performance improvement over either a separate processor or a separate reconfigurable device on an interesting range of problems drawn from embedded computing applications. As such, we hope to demonstrate that this composite device is an ideal system element for embedded processing. Reconfigurable devices have proven extremely efficient for certain types of processing tasks.
The key to their cost/performance advantage is that conventional processors are often limited by instruction bandwidth and execution restrictions, or by an insufficient number or type of functional units. Reconfigurable logic exploits more program parallelism. By dedicating significantly less instruction memory per active computing element, reconfigurable devices achieve a 10x improvement in functional density over microprocessors. At the same time, this lower memory ratio allows reconfigurable devices to deploy active capacity at a finer grain, allowing them to realize a higher yield of their raw capacity, sometimes as much as 10x, than conventional processors. The high functional density characteristic of reconfigurable devices comes at the expense of the high functional diversity characteristic of microprocessors. Microprocessors have evolved to a highly optimized configuration with clear cost/performance advantages over reconfigurable arrays for a large set of tasks with high functional diversity. By combining a reconfigurable array with a processing core, we hope to achieve the best of both worlds. While it is possible to combine a conventional processor with commercial reconfigurable devices at the circuit-board level, integration radically changes the I/O costs and design point for both devices, resulting in a qualitatively different system. Notably, the lower on-chip communication costs allow efficient cooperation between the processor and the array at a finer grain than is sensible with discrete designs.

RECONFIGURABLE COMPUTING

When we talk about reconfigurable computing, we're usually talking about FPGA-based system designs. Unfortunately, that doesn't qualify the term precisely enough. System designers use FPGAs in many different ways. The most common use of an FPGA is for prototyping the design of an ASIC. In this scenario, the FPGA is present only on the prototype hardware and is replaced by the corresponding ASIC in the final production system.
This use of FPGAs has nothing to do with reconfigurable computing. However, many system designers are choosing to leave the FPGAs in the production hardware. Lower FPGA prices and higher gate counts have helped drive this change. Such systems retain the execution speed of dedicated hardware but also have a great deal of functional flexibility. The logic within the FPGA can be changed if or when it is necessary, which has many advantages. For example, hardware bug fixes and upgrades can be administered as easily as their software counterparts. In order to support a new version of a network protocol, you can redesign the internal logic of the FPGA and send the enhancement to the affected customers by email. Once they've downloaded the new logic design to the system and restarted it, they'll be able to use the new version of the protocol. This is configurable computing; reconfigurable computing goes one step further. Reconfigurable computing involves manipulation of the logic within the FPGA at run-time. In other words, the design of the hardware may change in response to the demands placed upon the system while it is running. Here, the FPGA acts as an execution engine for a variety of different hardware functions, some executing in parallel, others in serial, much as a CPU acts as an execution engine for a variety of software threads. We might even go so far as to call the FPGA a reconfigurable processing unit (RPU). Reconfigurable computing allows system designers to execute more hardware than they have gates to fit, which works especially well when there are parts of the hardware that are occasionally idle. One theoretical application is a smart cellular phone that supports multiple communication and data protocols, though just one at a time. When the phone passes from a geographic region that is served by one protocol into a region that is served by another, the hardware is automatically reconfigured.
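The multiprotocol phone scenario can be sketched in software terms: the single RPU holds whichever protocol design is currently loaded, and crossing a region boundary swaps it. Modeling "loading a bitstream" as swapping a function pointer is, of course, only an analogy; the names and the two toy protocols are illustrative.

```c
#include <stdint.h>

/* A "hardware design" is modeled as a function; loading a bitstream
   into the RPU is modeled as storing a function pointer. */
typedef uint32_t (*hw_design_t)(uint32_t);

/* Two protocol engines that cannot both fit in the fabric at once
   (both are made-up placeholders for real baseband logic). */
uint32_t proto_a_encode(uint32_t x) { return x ^ 0xA5A5A5A5u; }
uint32_t proto_b_encode(uint32_t x) { return (x << 1) | (x >> 31); }

static hw_design_t rpu_loaded;  /* the design currently "in the fabric" */

/* Crossing into a region served by a different protocol triggers a
   reconfiguration: the old logic is overwritten by the new design. */
void rpu_reconfigure(hw_design_t design) { rpu_loaded = design; }

uint32_t rpu_run(uint32_t x) { return rpu_loaded(x); }
```

Only one protocol's gates exist at any moment, yet the phone behaves as if it owned dedicated silicon for both, which is exactly the "more hardware than gates" effect described above.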
This is reconfigurable computing at its best, and using this approach it is possible to design systems that do more, cost less, and have shorter design and implementation cycles. Reconfigurable computing has several advantages:

• First, it is possible to achieve greater functionality with a simpler hardware design. Because not all of the logic must be present in the FPGA at all times, the cost of supporting additional features is reduced to the cost of the memory required to store the logic design. Consider again the multiprotocol cellular phone. It would be possible to support as many protocols as could be fit into the available on-board ROM. It is even conceivable that new protocols could be uploaded from a base station to the handheld phone on an as-needed basis, thus requiring no additional memory.

• The second advantage is lower system cost, which does not manifest itself exactly as you might expect. On a low-volume product, there will be some production cost savings, which result from the elimination of the expense of ASIC design and fabrication. However, for higher-volume products, the production cost of fixed hardware may actually be lower. We have to think in terms of lifetime system costs to see the savings. Systems based on reconfigurable computing are upgradable in the field. Such changes extend the useful life of the system, thus reducing lifetime costs.

• The final advantage of reconfigurable computing is reduced time-to-market. The fact that you're no longer using an ASIC is a big help in this respect. There are no chip design and prototyping cycles, which eliminates a large amount of development effort. In addition, the logic design remains flexible right up until (and even after) the product ships. This allows an incremental design flow, a luxury not typically available to hardware designers. You can even ship a product that meets the minimum requirements and add features after deployment.
In the case of a networked product like a set-top box or cellular telephone, it may even be possible to make such enhancements without customer involvement.

RECONFIGURABLE HARDWARE

Traditional FPGAs are configurable, but not run-time reconfigurable. Many of the older FPGAs expect to read their configuration out of a serial EEPROM, one bit at a time, and they can only be made to do so by asserting a chip reset signal. This means that the FPGA must be reprogrammed in its entirety and that its previous internal state cannot be captured beforehand. Though these features are compatible with configurable computing applications, they are not sufficient for reconfigurable computing. In order to benefit from run-time reconfiguration, the FPGAs involved must have some or all of several enabling features; the more of these features they have, the more flexible the system design can be. The software that manages the reconfigurable hardware is responsible for:

• deciding which hardware objects to execute and when
• swapping hardware objects into and out of the reconfigurable logic
• performing routing between hardware objects, or between hardware objects and the hardware object framework.

Of course, having software manage the reconfigurable hardware usually means having an embedded processor or microcontroller on board. (We expect several vendors to introduce single-chip solutions that combine a CPU core and a block of reconfigurable logic by year's end.) The embedded software that runs there is called the run-time environment and is analogous to the operating system that manages the execution of multiple software threads. Like threads, hardware objects may have priorities, deadlines, contexts, and so on. It is the job of the run-time environment to organize this information and make decisions based upon it. The reason we need a run-time environment at all is that there are decisions to be made while the system is running, and as human designers, we are not available to make them.
So we impart these responsibilities to a piece of software, which allows us to write our application software at a very high level of abstraction. To do this, the run-time environment must first locate space within the RPU that is large enough to execute the given hardware object. It must then perform the necessary routing between the hardware object's inputs and outputs and the blocks of memory reserved for each data stream. Next, it must stop the appropriate clock, reprogram the internal logic, and restart the RPU. Once the object starts to execute, the run-time environment must continuously monitor the hardware object's status flags to determine when it is done executing. Once it is done, the caller can be notified and given the results. The run-time environment is then free to reclaim the reconfigurable logic gates that were taken up by that hardware object and to wait for additional requests to arrive from the application software. The principal benefits of reconfigurable computing are the ability to execute larger hardware designs with fewer gates and to realize the flexibility of a software-based solution while retaining the execution speed of a more traditional, hardware-based approach. This makes doing more with less a reality. In our own business we have seen tremendous cost savings, simply because our systems do not become obsolete as quickly as our competitors'. Reconfigurable computing enables the addition of new features in the field, allows rapid implementation of new standards and protocols on an as-needed basis, and protects the customer's investment in computing hardware. Whether you do it for your customers or for yourselves, you should at least consider using reconfigurable computing in your next design. You may find, as we have, that the benefits far exceed the initial learning curve. And as reconfigurable computing becomes more popular, these benefits will only increase.
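The run-time environment's handling of one request, as just described, can be sketched as a short C routine. The types and the gate-count bookkeeping are hypothetical placeholders rather than a real vendor API; the real steps of routing, clock control, and bitstream loading are represented here only by comments.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of the RPU's free-capacity bookkeeping. */
typedef struct {
    size_t gates_free;   /* uncommitted logic cells in the RPU */
} rpu_t;

/* Run one hardware object that needs `gates` logic cells.
   Returns false if the run-time environment cannot locate space. */
bool rte_execute(rpu_t *rpu, size_t gates)
{
    if (rpu->gates_free < gates)   /* 1. locate space within the RPU */
        return false;
    rpu->gates_free -= gates;      /* 2. claim it and route the I/O  */
    /* 3. stop the clock, load the object's bitstream, restart the
          RPU (all modeled as a no-op here), then
       4. poll the object's status flags until done and notify the
          caller with the results.                                   */
    rpu->gates_free += gates;      /* 5. reclaim the gates            */
    return true;
}
```

In a full run-time environment the failure path would queue the request, or evict an idle hardware object to make room, in the same way an OS swaps threads.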
ADVANTAGES OF RECONFIGURABILITY

The term reconfigurable computing has come to refer to a loose class of embedded systems. Many system-on-a-chip (SoC) computer designs provide reconfigurability options that combine the high performance of hardware with the flexibility of software. To most designers, SoC means encapsulating one or more processing elements, that is, general-purpose embedded processors and/or digital signal processor (DSP) cores, along with memory, input/output devices, and other hardware into a single chip. These versatile chips can perform many different functions. However, while SoCs offer choices, the user can choose only among functions that already reside inside the device. Developers also create ASICs, chips that handle a limited set of tasks but do them very quickly. The limitation of most types of complex hardware devices (SoCs, ASICs, and general-purpose CPUs) is that the logical hardware functions cannot be modified once the silicon design is complete and fabricated. Consequently, developers are typically forced to amortize the cost of SoCs and ASICs over a product lifetime that may be extremely short in today's volatile technology environment. Solutions involving combinations of CPUs and FPGAs allow hardware functionality to be reprogrammed, even in deployed systems, and enable medical instrument OEMs to develop new platforms for applications that require rapid adaptation to input. The technologies combined provide the best of both worlds for system-level design. Careful analysis of computational requirements reveals that many algorithms are well suited to high-speed sequential processing, many can benefit from parallel processing capabilities, and many can be broken down into components that are split between the two. With this in mind, it makes sense to always use the best technology for the job at hand.
Processors (like DSPs) are best suited to general-purpose processing and high-speed sequential processing, while FPGAs excel at high-speed parallel processing. The general-purpose capability of the CPU enables it to perform system management very well, and allows it to be used to control the content of the FPGAs contained in the system. This symbiotic relationship between CPUs and FPGAs also means that the FPGA can off-load computationally intensive algorithms from the CPU, allowing the processor to spend more time working on general-purpose tasks such as data analysis, and more time communicating with a printer or other equipment.

CONCLUSION

These new chips, called chameleon chips, are able to rewire themselves on the fly to create the exact hardware needed to run a piece of software at the utmost speed. Such a chip can also be called a "chip on demand." Reconfigurable computing goes a step beyond programmable chips in the matter of flexibility: it is not only possible but relatively commonplace to "rewrite" the silicon so that it can perform new functions in a split second. Reconfigurable chips are simply the extreme end of programmability. Highly flexible processors that can be reconfigured remotely in the field, Chameleon's chips are designed to simplify communication system design while delivering increased price/performance. The Chameleon chip is a high-bandwidth reconfigurable communications processor (RCP). It allows a system's design to be changed from a remote location, which will mean more versatile handhelds. Its applications include the data-intensive Internet, DSP, wireless base stations, voice compression, software-defined radio, high-performance embedded telecom and datacom applications, xDSL concentrators, fixed wireless local loop, multichannel voice compression, and multiprotocol packet and cell processing.
Its advantages are that it can create customized communications signal processors, it offers increased performance and channel count, it adapts more quickly to new requirements and standards, and it lowers development costs and risk.

A FUTURISTIC DREAM

One day, someone will make a chip that does everything for the ultimate consumer device. The chip will be smart enough to be the brains of a cell phone that can transmit or receive calls anywhere in the world. If the reception is poor, the phone will automatically adjust so that the quality improves. At the same time, the device will also serve as a handheld organizer and a player for music, videos, or games. Unfortunately, that chip doesn't exist today. It would require:
• flexibility
• high performance
• low power
• low cost

But we might be getting closer. A new kind of chip may reshape the semiconductor landscape: it adapts to any programming task by effectively erasing its hardware design and regenerating new hardware that is perfectly suited to run the software at hand. These chips, referred to as reconfigurable processors, could tilt the balance of power that has preserved a decade-long standoff between programmable chips and hard-wired custom chips. If these adaptable chips can reach cost-performance parity with hard-wired chips, customers will abandon static hard-wired solutions.
And if silicon can indeed become dynamic, then so will the gadgets of the information age. No longer will you have to buy a camera and a tape recorder; you could buy one gadget and then download a new function for it when you want to take some pictures or make a recording. Just think of the possibilities for the fickle consumer. Programmable logic chips, which are arrays of memory cells that can be programmed to perform hardware functions using software tools, are more flexible than DSP chips but slower and more expensive.

For consumers, this means the day isn't far away when a cell phone can be used to talk, transmit video images, connect to the Internet, maintain a calendar, and serve as entertainment during travel delays, all without the need to plug in adapter hardware.

ABSTRACT

Chameleon chips are chips whose circuitry can be tailored specifically for the problem at hand. They are an extension of what can already be done with field-programmable gate arrays (FPGAs). An FPGA is covered with a grid of wires; at each crossover there is a switch that can be semipermanently opened or closed by sending it a special signal. Usually the chip must first be inserted in a little box that sends the programming signals, but labs in Europe, Japan, and the U.S. are now developing techniques to rewire FPGA-like chips at any time, along with software that can map out circuitry optimized for specific problems. The chips still won't change colors, but they may well color the way we use computers in years to come. A chameleon chip is a fusion between custom integrated circuits and programmable logic.
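The grid-of-switches picture in the abstract can be made concrete with the basic FPGA logic cell, the look-up table (LUT). A 2-input LUT is just four configuration bits indexed by the inputs, and "rewriting the silicon" amounts to loading a new truth table. This is a deliberately simplified model of the idea, not any real bitstream format:

```python
class Lut2:
    """2-input look-up table: the config is a 4-bit truth table,
    indexed by the inputs as (b << 1) | a."""
    def __init__(self, config_bits):
        self.bits = config_bits           # e.g. [0, 0, 0, 1] implements AND

    def reprogram(self, config_bits):
        """'Rewiring' the cell = overwriting its truth table."""
        self.bits = config_bits

    def eval(self, a, b):
        return self.bits[(b << 1) | a]

cell = Lut2([0, 0, 0, 1])                 # configured as AND
print([cell.eval(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [0, 0, 0, 1]

cell.reprogram([0, 1, 1, 0])              # reconfigured as XOR "in a split second"
print([cell.eval(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [0, 1, 1, 0]
```

A real FPGA holds thousands of such cells plus the crossover switches that route signals between them; reconfiguration rewrites both the truth tables and the routing.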
In the case of highly performance-oriented tasks, custom chips that do one or two things spectacularly, rather than many things averagely, are used. Now, using field-programmable chips, we have chips that can be rewired in an instant. Thus the benefits of customization can be brought to the mass market.

CONTENTS
• INTRODUCTION
• CHAMELEON CHIPS
• ADVANTAGES AND APPLICATION
• FPGA
• CS2112
• RECONFIGURING THE ARCHITECTURE
• RECONFIGURABLE PROCESSORS
• RECONFIGURABLE COMPUTING
• RECONFIGURABLE HARDWARE
• ADVANTAGES OF RECONFIGURABILITY
• CONCLUSION