egumasa's picture
push
1f2aa3c
TEXT_LIST_BAWE = [
'''
<9.2.3 EXTRAPOSED that-clause with verbs (Biber, 2021)>
It’s a wonder he’s got any business at all!
It seemed however that in-pig sows showed more stress than empty ones. (Verb)
It now appears that I will be expected to part with a further portion of my income as a graduate tax. (Verb)
It follows that frequentist probability is conceptually inadequate for the design or licensing of hazardous facilities. (Verb)
It has been shown that sites near the mushroom bodies control the production of normal song-rhythms. (Verb)
''',
'''
It just never crossed their minds that it might happen. (conv†) cf. That it might happen just never crossed their minds.
It’s good to see them in the bath. (conv†) cf. To see them in the bath is good.
It had taken him 26 years to return. (news†) cf. To return had taken him 26 years.
It seems odd that I should be expected to pay for the privilege of assisting in this way. (news) cf. That I should be expected … (seems odd)''',
'''
<POST PREDICATE>.
The minister is confident that Pakistan could deflect western pressure. (post-predicate)
I'm sure that they'd got two little rooms on the ground floor (post-predicate)
They are undoubtedly right that it has now become clear that the government will not pay for the expansion it desires. (post-predicate)
I'm afraid it brings the caterpillars in. (post-predicate)
I'm sorry I hit you just now. (post-predicate)
Ellen was pleased that Tobie had remembered. (post-predicate)
The president himself can hardly have been surprised that his own head was now being demanded on a platter. (post-predicate)
''',
'''
<9.2.4 Subject noun phrases with subject predicative that-clauses (Biber, 2021)>
The problem is that the second question cannot be answered until Washington comes up with a consensus on the first.
The problem about heroin is that the money is so good that even good people do it.
The only problem may be that the compound is difficult to remove after use.
Another reason to use Ohio as a surrogate for the country as a whole is that the data base for hazardous waste generation and flow for the state are fairly good.
The net result is that foreign money has frequently ended up fertilising or irrigating opium fields.
Our first conclusion at this point was that it is necessary to support the specification and application of regulations and patterns in groups.
The truth is that the country is now specialising more in processing and marketing.
''',
'''
It’s nice that people say it to you unprompted. (extraposed that with adj)
It has been clear for some time that the demands of the arms control process would increasingly dominate military planning. (extraposed that with adjective)
But already it is certain that the challenges ahead are at least as daunting as anything the Cold War produced. (extraposed that with adjective)
It is obvious that direct chilling of the udder depends as much on the thermal properties of the floor as on the air temperature. (extraposed that with adjective)
It is unlikely that any insect exceeds about twice this velocity. (extraposed that with adjective)
It is good that our clan holds the Ozo title in high esteem. (extraposed that with adjective)
It’s horrible that he put up with Claire’s nagging. (extraposed that with adjective)
It is tragic that so many of his generation died as they did. (extraposed that with adjective)
It is unfair that one sector of the water industry should be treated more favourably than another. (extraposed that with adjective)
It is conceivable that this critical stage would not be reached before temperatures began to rise again in the spring. (extraposed that with adjective)
It is preferable that the marked cells [should be] identical in their behaviour to the unmarked cells. (extraposed that with adjective)
It is sensible that the breeding animals [receive] the highest protection. (extraposed that with adjective)
It is essential that the two instruments should run parallel to the microscope stage. (extraposed that with adjective)
It is vital that leaking water is avoided. (extraposed that with adjective)
It is important that it be well sealed from air leakage. (extraposed that with adjective)
It is desirable that it be both lined and insulated. (extraposed that with adjective)
It now seems unlikely that his depends on an oriented layer of wax molecules subject to disruption at a critical transition temperature. (acad)
It is possible that variations in the colour and intensity of light reflected from these structures help to confuse predators as to the size and distance of the insect. (acad)
It is now generally accepted that wings arose, perhaps in he early Devonian, as lateral expressions of thoracic terga. (acad)
It is perhaps more likely that they were associated with locomotion from the beginning. (acad)
It is interesting that in Stenobothrus rubicundus the same nervous mechanism can induce two different activities: <...> (acad†)
It is certain that the challenges ahead are at least as daunting as anything the cold war produced. (news)
It was obvious that no subjects could perceive the movement at a normal distance. (acad†)
It is vitally important that both groups are used to support one another. (acad†)
''',
'''It was yesterday that I went to your house.
It's not the technology which is wrong.
It is us who have not learned how to use it.
It is the second of these points that I shall be taking about.
It wasn't until 1989 that we finally notice it.
It was only by sheer luck that I noticed the key was missing.
It was at this stage that the role of the DCSL became particularly important.
It was he who had given Billy morphine.
Some people say it was him that wrote it.
''',
'''
It was in 1903 that Emmeline Pankhurst and her daughter Christabel founded the Women's Social and Political Union (WSPU), following around forty years of organised campaign for female suffrage organisations in Britain (Banks 1981).
After another fifteen years of campaign, interrupted by the First World War, women were finally granted the vote in 1918 through the Representation of People Act.
It remains, however, a controversial matter as to whether the militant campaigns of the WSPU actually helped to quicken the vote for women, or not.
In order to get a clearer picture of the tactics of the suffragettes, it is worthwhile to take a closer look at their use of the female body in violent, unconventional and often illegal ways, to draw attention to their cause.
--- Para SEP ---
The connection of British femininity with a high morality, and the idea of gender equalities through historical argumentation were common ground arguments for the vote in the late nineteenth century.
Although arguments of the 'constitutionalists' always sought to stay within the boundaries of middle class respectability, they certainly incorporated argumentation based upon the female body (Holton 1998).
The most evident examples of this can be found in racist theories.
Female authors attempted to present an image of a superior British race, of which women, had, by necessity, always been part.
Charlotte Carmichael Stopes', in her book British Freewomen of 1894, argued that women's right to political participation originated from the ancient Britons.
'The moral codes, sexual equality in physical height [my italics]' were, in her book presented as arguments for women's suffrage (Holton 1998: 158).
Constitutionalist feminists increasingly began to make use of racial reasoning to support their campaign for the vote (Holton 1998).
This provided the movement with a legal means of enhancing female respectability and high morale in a way that was compatible and in harmony with society.
--- Para SEP ---
However, after forty years of such campaigns, the women's vote was still nearly as far away as it had been at the outset.
This realisation caused the WSPU to seek to pressurise the government, for they were responsible for the problem (Pugh 2000).
--- Para SEP ---
From a harmony model, the dominant suffrage campaigns thence shifted to a model of conflict, bringing the movement into a new phase (Banks 1981, Holton 1998).
The suffragettes, as the WSPU activists came to be known, sought to cut right through to the core of the problem by addressing the government directly.
They sought to point out the inherent contradictions of the political system as it was: the partial inclusion of women into an essentially male-dominated environment (Lawrence 2000).
From insisting on politicians' support in public meetings, the suffragettes soon radicalised (Vicinus 1985).
They felt that the suffrage question was not dealt with seriously, and from there the WSPU leader Christabel Pankhurst set out to phrase the problem more directly: '[i]f Mr Asquith [PM] would not let her vote, she was not prepared to let him talk' (Lawrence 2001).
This meant a great leap away from the Victorian feminist movement; suffragettes sought to replace the passive, homely housewife with a campaigning activist, a political being.
In the words of the prominent suffragette Emmeline Pethick-Lawrence: 'Not the Vote only, but what the Vote means - the moral, the mental, economic, the spiritual enfranchisement of Womanhood, the release of women...' was of vital importance to women (Vicinus 1985: 249).
''',
'''
--- Para SEP ---
A new stage of interrupting meetings started.
Suffragettes continually disrupted parliamentary debates, by shouting out against the speaker.
Increasingly coordinated, the suffragettes were sometimes able to spoil a complete speech, by disrupting the speaker in turns.
In 1908, a speech by Lloyd George was continually interrupted for two hours, with fifty women carried out (Pugh 2000).
Although this obstruction of the political process was arguably playing into the hands anti-suffragists, such forceful, physical practices of politics can be said to have been part of masculine politics as it was conducted by male politicians (Lawrence 2001).
Indeed, when members of the audience could be forcefully removed from a public political meeting, this might be interpreted as a threat to the civil liberties (Pugh 2000).
This meant a moral victory for the suffragettes, especially since gentlemanliness towards women was expected of politicians.
--- Para SEP ---
Mass assault and arrests of women were not an uncommon sight any longer, which the particularly violent events of 'Black Friday' in November 1910 highlighted (Vicinus 1985).
Hence, the suffragettes increasingly highlighted this state brutality by a number of means.
Hunger striking was one of them, first begun on the initiative of Marion Wallace Dunlop in July 1909 (Pugh 2000).
Protesting against the government's refusal to grant the suffragettes the status of political prisoners, the WSPU soon managed to place the campaign for female suffrage on a moral high ground, as the government had to face the issue of the prisoners' treatment.
The WSPU brilliantly publicised this moral strength, speaking of 'moral right triumphing over physical force' (Vicinus 1985).
Forcible feeding of hunger striking suffragettes soon received criticism from doctors (Pugh 2000) and graphic representation presented a shocking picture of the treatment of women in prisons.
The problem continued, and leaving even the parliament divided (John 2001).
In 1913, the Cat and Mouse Act was thus passed, which dismissed the policy of forcible feeding and was aimed at avoiding the negative publicity deriving from it.
To some extent, the act succeeded in doing so, as many people argued that their suffering was 'self-imposed and their martyrdom as in some sense staged' (Harrison 1978: 180).
This loss of sympathy and moral and intellectual high ground was however enhanced by the suffragettes increasing radicalisation and alienation from sympathisers (Pugh 2000).
--- Para SEP ---
The suffragettes showed a radical impatience and determination that eventually led them to virtually abandon any techniques of persuasion (Pugh 2000).
The years in the run towards World War I resulted in the most passionate outbursts of the suffragettes attack upon male domination of the political system (Vicinus 1985).
--- Para SEP ---
A first systematic window breaking campaign had been undertaken in 1909, of which Sylvia Pankhurst said, 'let it be the windows of the Government, not the bodies of women which shall be broken' (Lawrence 2001: 222).
A near concession of the parliament in 1910, which would have resulted in a limited vote for women, could not be passed by a much divided parliament.
Winston Churchill, usually a sympathiser of the women's suffrage cause, voted against the bill in resentment of the WSPU's violent tactics (Harrison 1978).
However, this failure so outraged the rank-and-file WSPU members, that another window breaking campaign was started by the end of 1911 (Vicinus 1985).
--- Para SEP ---
''',
'''
Popular support for the WSPU was on the decline.
The violence of the suffragettes' campaign made it possible to argue that women did not demonstrate the capability to participate in politics due to their rash and unreasonable behaviour.
Criticism of the Union's leaders, Emmeline and Christabel Pankhurst, even enhanced this view.
'They are not calm women,' Teresa Billington-Greig claimed, 'but irritable, yielding to sudden tempest' (Harrison 1978: 176).
And whilst, 'the police, in early stages [...] avoided heavy-handed treatment of the women' (Pugh 2000: 193), they still insisted on continued provocation and imprisonment (Pugh 2000).
--- Para SEP ---
In 1912 Christabel Pankhurst fled to Paris after the police raided the WSPU headquarters in London.
As the movement was losing grip of its popular support, so did Christabel break with some of the high-ranking campaigners, among them the Pethick-Lawrences whom had up until then financed the several branches of the organisation (Pugh 2000).
The publication of the 'The Freewoman', a periodical by a number of suffragettes, between 1911 and 1913, 'reawakened the free love fears that had haunted the feminist movements since its beginning' (Banks 1981: 132).
The Pankhursts, however, had always insisted on dressing in feminine and respectable manner to disarm opponents from criticism (Pugh 2000).
And despite arson, window breaking and other violent behaviour of WSPU movements, the organised anti-suffragist movement was cautiously following the skill with which the suffragettes sought publicity.
As correspondence of October 1913 shows, the anti-suffragist movement was well aware of the press attention that the suffragettes managed to obtain. '[P]ublicity in the Press', an executive committee member wrote, 'is our greatest need and our opponents' chief advantage over us' (Harrison 1978: 175).
It was this sparked public debate, and the attitude of directness and briskness in contrast to the vagueness and eventual moral weakness of the anti-suffragists over the real debate of women's suffrage, that eventually led the parliament to extend the vote to women over thirty.
--- Para SEP ---
The suffragettes' efforts can be said to be characterised by a curious mixture of Victorian respectability and morality blended with a continual and controversial insistence on their rights.
An immense confidence in the female gender to the extent of superior feelings (Vicinus 1985) combined with a strong sisterhood and sense of sacrifice for the cause led Sylvia Pankhurst to burst out even in 1954 against anti-suffragist Violet Markham: 'that foul traitor - who, while the suffragettes were hunger striking, appeared on the Albert Hall platform, [...] protesting against women having the vote' (Harrison 1978: 13).
Resultantly, at all times, the suffragettes used their bodies as a symbol of sovereignty and moral superiority over the established political power, and this to point out the flaws in the anti-suffrage argumentation.
--- Para SEP ---
''',
'''
The requirements for the Space Shuttle were that the system could be justified economically and had to encompass all space business.
This conflict of requirements eventually led to a compromise been formulated.
However, NASA developed a system that is made up of three parts; an Orbiter with three Main Engines, two Solid Rocket Boosters, and an External Tank.
The Space Shuttle has not met any of the original requirements including reliability, maintainability, cost, or safety.
--- Para SEP ---
There is no crew escape system in the Space Shuttle because NASA thought that it was safer than all other spacecraft.
--- Para SEP ---
During the development of the Space Shuttle the three parts of the Space Shuttle system (four if you count the main engines as been separate) were tested separately on the ground.
The overall Space Shuttle System as a whole was only tested in reality on its first flight!
Prior to this the overall system was tested using modelling.
In Figure 1 we see the Orbiter part on one of its 'Approach and Landing Tests'.
These tests proved the Space Shuttle's ability to land safely.
--- Para SEP ---
The first launch was four years behind schedule mostly due to problems with the main engines and heat protection tiles.
Irrespective of this NASA went over budget by 15%.
The Space Shuttle Columbia was the first to launch and before it was destroyed had undergone many developments during its life including second generation main engines and a new 'glass' cockpit.
--- Para SEP ---
On was the 17 th day of the mission, Saturday 1st February 2003 the crew underwent standard procedures to prepare the Space Shuttle for return to the Kennedy Space Centre.
This firstly involved finishing any remaining experiments and storing them safely for the journey back to earth.
The external payload doors and covers were closed.
Next the crew prepared themselves and the Space Shuttle for the de-orbit burn, re-entry and the scheduled landing at the Kennedy Space Centre.
The re-entry program was loaded into the Shuttle's computer system.
The last step of preparation before the de-orbit burn was to orientate the Space Shuttle at the right angle.
--- Para SEP ---
In the Mission Control Centre at 2.30 a.m. the Entry Flight Control Team began duty.
They completed checklists for de-orbit and re-entry.
The weather conditions at the Kennedy Space Centre were analysed by weather forecasters and pilots in the Space Shuttle Training Aircraft.
They concluded that all weather conditions were acceptable for the scheduled landing.
This was 20 minutes before the de-orbit burn was to be started.
--- Para SEP ---
''',
'''
Just after 8.00 a.m. a poll was held in the Mission Control Room for a GO or NO-GO on the de-orbit burn.
At 8.10 a.m. the Shuttle crew were notified the decision was a GO for the de-orbit burn.
--- Para SEP ---
Columbia was flying upside down and tail first 175 miles above the Indian ocean at 8:15:30 a.m. when the de-orbit burn was commenced.
This burn, that slowed Columbia from 17,500 MPH, lasted 2 minutes and 38 seconds.
After the burn the Shuttle was orientated the right way up, nose first ready for re-entry.
--- Para SEP ---
'Entry Interface' (EI) is defined as the point at which the descending Space Shuttle begins to become affected by the Earth's atmosphere.
This is at approximately 400,000 feet above sea level and happened when Columbia was over the Pacific Ocean at 8:44:09 a.m.
(EI+000 seconds).
--- Para SEP ---
When the Space Shuttle entered the atmosphere the impact with air molecules caused heat to be produced.
Over the next six minutes this caused temperatures on the wing leading edge to reach up to 2,500 °F. At EI+270 abnormally high strains were detected on the left wing leading edge spar.
However, this data was stored in the Modular Auxiliary Data System, not transmitted to Mission Control or seen by the crew.
--- Para SEP ---
The next scheduled part of the re-entry flight plan was carried out at EI+323, this was to roll Columbia to the right as part of a banking turn to increase lift and reduce the rate of descent and therefore heating.
The Space Shuttle was travelling at the very fast speed of Mach 24.5 at this time.
--- Para SEP ---
With its speed now reduced to Mach 24.1 at EI+404 the Space Shuttle experienced the highest levels of heating.
This heating started when the Shuttle was around 243,000 feet above ground and lasted for 10 minutes.
At EI+471 when the Shuttle was 300 miles off the West coast of California the temperatures on the leading edge of the wings would have been about 2,650 °F. When it did cross the Californian coast at its peak this temperature would have risen to 2,800 °F. This occurred at EI+557, 231,600 feet, Mach 23.
20 seconds later at EI+577 it was seen from the ground that debris was shedding from the Space Shuttle.
--- Para SEP ---
At EI+6 13 the Maintenance, Mechanical, and Crew Systems (MMACS) officer reported that four hydraulic sensors in the left wing had failed as they were showing a reading below their minimum.
Wing leading edge temperatures would have reached 3,000 °F at EI+683 as Columbia crossed the states Nevada and Utah, before crossing into Arizona at EI+741.
The craft was rolled the other way from a right to left banking turn.
By the time the Shuttle had crossed into New Mexico wing leading edge temperatures would be reduced to 2,880 °F at EI+831.
--- Para SEP ---
In re-entry, superheated air of at least a few thousand degrees Fahrenheit entered the leading edge of the left wing.
This would not normally happen during re-entry, however, on this occasion there was a breach in one of the panels of Reinforced Carbon-Carbon at this point.
These extreme temperatures caused the thin aluminium spar structure of the left wing to melt more and more until the structure was weakened beyond tolerances.
This caused aerodynamic forces to increase, but at this point the on-board computer simply made adjustments by steering to keep the Space Shuttle on its correct flight profile throughout re-entry.
At this time nobody, neither on the ground or the astronauts on board, knew anything was wrong.
This is because the flight data during re-entry is not transmitted to Mission Control.
Instead it is collected and stored in the Space Shuttle.
--- Para SEP ---
''',
'''
The heavily damaged left wing was subjected to increasing aerodynamic forces due to denser atmosphere as it descended lower on its flight-path.
While the Shuttle was travelling at Mach 19.5 a Thermal Protection System Tile was shed at EI+85 1.
At EI+906 pressure readings were lost on both left main landing gear tyres.
--- Para SEP ---
The crew were informed that Mission Control saw the readings, were evaluating them and that they did not understand the last transmission.
When it was above Texas, eventually the wing completely failed and so control of the Space Shuttle was lost.
This loss of control occurred when Columbia was travelling at least 10,000 MPH. The crew responded to Mission Control but it was broken up as the Shuttle disintegrated at EI+969.
--- Para SEP ---
81.7 seconds after launch multiple pieces of hand-crafted insulating foam became separated from the left bipod ramp section of the External Tank.
Video evidence, however, shows only one piece, approximately 24±3" by 15±3" of unknown thickness but described as 'flat', struck the wing.
NASA believed that foam loss can only occur if the foam itself is faulty in the first place.
However, this alone, may not be the reason for foam loss.
The foam that came off and struck the Shuttle during launch is low density, consisting of small hollow cells and has variability across its structure.
There is no known way to assess the foam on the bipod ramps on the external tank physically.
There are several theories why the foam comes off during launch, as it has occurred on numerous previous launches but never caused a problem.
The 'Cryopumping' theory exploits cracks in the foam connected to voids in the foam near the cryogenic tanks in the external tank.
The extremity of low temperature may liquefy the air in these voids which may boil later in the launch as aerodynamic forces heat the foam exterior.
If this continues and the liquid evaporates pressure can build up and cause the foam to break off the external tank.
--- Para SEP ---
The foam impacted the Space Shuttle at a relative speed of 545 MPH since as it fell away it slowed down quickly because it is of low density.
The impact had no effect on the launch or the mission as the Shuttle orbited the earth.
The impact was in the immediate vicinity of the lower half of Reinforced Carbon-Carbon (RCC) panel number 8 and had in fact caused a breach in the Thermal Protection System on the leading edge of the left wing.
--- Para SEP ---
''',
'''
In re-entry, superheated air (~5000 °F) was able to enter through the leading edge insulation due to this breach in the Thermal Protection System.
In the back of each RCC panel there is a slot which in the case of panel number 8, superheated air would have been able to pass through and damage the Thermal Protection System behind it.
It caused the thin aluminium spar structure of the left wing to melt more and more until the structure was burned through.
--- Para SEP ---
From the time sensors appeared to fail because the wiring to them had been destroyed, and the layout of the wiring to these sensors, the following was discovered:
--- Para SEP ---
The breach started in the upper two-thirds of the spar before expanding downwards in the direction of the underneath of the wing.
--- Para SEP ---
By EI+522 this breach was more than 9 inches long.
--- Para SEP ---
The superheated air was now able to enter the mid-wing section.
--- Para SEP ---
Damage did not spread into the wing cavity forward of the left wheel well until at least EI+935.
Therefore the breach was aft of the cross spar.
--- Para SEP ---
For 15 seconds after EI+555 the temperature rose on the fuselage sidewall and left Orbital Manoeuvring System pod.
The latter of these was later found, through hypersonic wind tunnel testing, to be caused by damage to the leading edge of the left wing near RCC panel number 9.
At this point, the superheated air had been in the mid-wing section since EI+487 entering at a temperature up to 8,000 °F. The shape of the wing is supported by aluminium trusses that will have melted at 1,200 °F.
--- Para SEP ---
Although the flight computer was counteracting it, to keep the Shuttle on its pre-planned flight path, it was actually tending to roll to the left due to a loss of lift on the left wing.
At EI+602 this changed to tending to roll right due to increased lift from the left wing.
This is thought to have resulted from high temperatures in the mid-wing section damaging wing skins as well and thereby modifying the overall shape of the wing.
As the RCC panels were damaged further drag also increased and contributed to more left yaw on the Shuttle.
With the wing this badly misshaped the flight control systems counteracted this using aileron trim.
--- Para SEP ---
By EI+727 Mission Control had received temperature rises in hydraulic lines inside the left wheel well.
Again at +790 they received another indication of a problem, this time the tyre pressure sensors indicated quick heating then blow-out of the tyres.
However, they did not know both of these were due to superheated air and the Shuttle could not have recovered from damage inflicted due to the breach when it was this far gone.
--- Para SEP ---
Minutes later, at EI+9 17 the Shuttle experienced large increases in positive roll and negative yaw due to increasing drag and lift from the left wing which for the first time modified the re-entry path.
The heavily damaged left wing was subjected to increasing aerodynamic forces due to denser atmosphere as it descended lower on its flight-path.
The signal was lost just 6 seconds later.
The flight control system tried to counteract this too using yaw jets on the right hand side but it was not enough and control was lost at EI+970 when Columbia was travelling at least 10,000 MPH.
--- Para SEP ---
In the short term, to prevent this accident a number of additional checks should have been carried out before the launch and once the shuttle was in orbit.
--- Para SEP ---
''',
'''
A visual inspection should have been carried out on the leading edge of the left wing either by a camera on a space station or an astronaut conducting a space walk outside of the Space Shuttle.
A rescue mission would have been implemented if experts on the ground looking at these pictures could see the breach in the RCC panel.
It should not be considered that the astronauts could have repaired the Space Shuttle without the proper equipment.
It would have been impossible to foresee the need to include a 'RCC panel repair kit' onboard the Shuttle since they are only used during re-entry.
--- Para SEP ---
The problem of the foam coming off the bipod arm was known about before and had been observed on numerous occasions, even though it had never caused problems before.
They should have had some sort of procedure that allowed for problems like this to be fixed, in case it caused a problem.
The overall design of the system was never supposed to be able to tolerate what happened.
--- Para SEP ---
As the re-entry of Columbia was expected to be a normal re-entry, there was no way for the crew to survive a break up of the Shuttle during re-entry.
I think perhaps that this issue will need addressing in light of this accident so that if it comes to it the crew can escape from any potential accident.
In aircraft this would be in the form of an ejector seat and/or use of parachutes.
There are several technical difficulties associated with implementing this in the Space Shuttle since it needs to protect the crew from the extremes of temperature experienced while in space and during re-entry.
Extra seals and hatches on the exterior of the Space Shuttle would pose as weaknesses.
Therefore I propose that a modification should be made to the Space Shuttles or implemented in a next generation of re-useable space going craft.
This would take the form of a capsule within the craft where all the normal crew fittings and environment are located.
This would be connected to the main body of the craft but able to quickly to break away from it and activate its own parachute system to bring the crew down safely.
This is necessary due to the very small amount of time before the crew or ground control may be aware that the craft is in danger of disintegration.
--- Para SEP ---
Other recommendations relate to the issue of re-using external tanks and other fittings in the launch process.
If foam can come off the external tanks, then by using new tanks for every launch this problem can be avoided in the future.
An alternative to this should be to redesign any parts of the system that are known to have problems starting by accurately defining the specifications and tolerances of all the parts that do not have problems.
Then the system as a whole should be more acceptably safe.
--- Para SEP ---
''',
'''The Law of one price, LOOP, and Purchasing Power Parity theory, PPP, are amongst the oldest and most important economic theories due to their use in theorems attempting to explain Exchange Rate movements.
The relevance and actual evidence of these hypotheses is still the subject of much debate and research.
The initial assumptions for both hypotheses are that there are no barriers to trade and no associated costs.
The models assume no transport costs and perfectly competitive markets.
It also makes the important assumption that arbitrageurs have access to necessary funds with which to trade when opportunities arise.
--- Para SEP ---
LOOP is defined as being: 'When trade is open and costless, identical goods must trade at the same relative prices regardless of where they are sold.'
Gerard Debreu (1959) in "Theory of Value" defined identical goods as those being in identical locations, but here we will treat goods as being identical, if as such regardless of location.
--- Para SEP ---
LOOP:
--- Para SEP ---
The intuition behind the formula is such that if price differences did exist, then arbitrageurs would buy large quantities of the product in the relatively cheaper country and sell it in the more expensive country at a profit.
--- Para SEP ---
Absolute PPP is the point such that the 'Exchange Rate between 2 country's currencies equals the ratios of the country's price levels.' P = eP * The intuition is the same as for LOOP.
--- Para SEP ---
Relative PPP is when the percentage change in exchange rates between 2 country's over any period is equal to the difference between the percentage change in national price levels.
--- Para SEP ---
''',
'''
--- Para SEP ---
Relative PPP is a statement about price changes whereas absolute is about price levels.
--- Para SEP ---
If LOOP holds for every commodity then PPP must hold but LOOP need not hold for PPP validity.
--- Para SEP ---
There are several problems with these hypotheses.
Firstly, there is a problem with absolute PPP which compares national price levels.
Price levels are determined by a sum of weighted average prices of a suitably average basket of goods for that country.
As consumption patterns are very rarely identical between countries and also that the indexes are not compiled in a standardised way, makes comparisons between indexes biased and inaccurate.
For example, Norway will place more weight on the price of whale meat than Italy would as more of it is traded in Norway.
This is why relative PPP is so useful as it measures changes, not actual prices.
--- Para SEP ---
Secondly, the assumption that there are no barriers to trade such as tariffs and that there are no transport costs are unrealistic.
Within the EU and other such economic groups, there are no barriers to trade, but outside of these geographical areas, protectionism is increasing.
This distorts prices and can prevent arbitrage if there are quotas.
There have been several suggested solutions to transport costs.
The first is that output is split into 2 categories: tradable goods, such as raw materials and manufactured goods, for example a car and agricultural products; and nontradeable goods, those goods, for example a haircut where transport costs are so large relative to the cost of producing some goods and services that they can never be traded internationally at a profit.
--- Para SEP ---
An alternative view regarding transport or trade costs is that they may be linearly related to the value of the product, as suggested in the Iceberg model and hence is like an ad valorem tax and is in proportion to the product value.
This would have no impact on relative PPP, but unfortunately, it is rarely the case (see appendix).
--- Para SEP ---
Another problem with the hypotheses are that markets are commonly imperfectly competitive and that firms may price to market, and are price setters as such.
This is a large problem encountered in the manufacturing sector.
--- Para SEP ---
The Balassa Samuelson theory suggests a reason why simple PPP empirical tests may fail and attempts to explain why prices are lower in poorer countries and hence such why LOOP does not hold in some goods.
The model assumes that manufactured tradables have the same price regardless of location (LOOP holds).
It also assumes that poorer countries have poorer technology functions and it take more labour hours to produce one manufactured unit than it does in richer countries.
Since final prices are the same, wages have to be lower in the poorer country.
The wages paid in tradables and non-tradeables will be the same in the country.
Since productivity differences in non-tradeables (such as haircuts) will be negligible, prices of these products will be lower.
--- Para SEP ---
PPP and LOOP have important implications in Open Macroeconomics.
PPP forms a key assumption in theories such as the Monetary Approach to Exchange Rates, which when combined with the Fisher equation, has important implications.
--- Para SEP ---
''',
'''
The Monetary approach assumes perfectly flexible markets and outputs.
It assumes that the Foreign Exchange markets set the Exchange rates such that PPP holds.
Exchange rates are fully determined in the long run by the relative supplies and demands for money such that Md = L(R t ,YUS).
Changes in the interest rates or output only affect exchange rates through their effect on the demand for money.
The monetary approach concludes that in the long run, Exchange rates move in proportion to money supply and also, somewhat against immediate intuition, that an increase in interest rates leads to a long run depreciation in the economy.
These conclusions are derived in the appendices.
--- Para SEP ---
The Fisher equation is such that the real interest rate, R, is equal to the nominal interest rate, r, minus inflation and defined as "all else equal, a rise in a country's expected inflation rate will eventually cause an equal rise in the interest rate that deposits of its currency offer."
Assuming that Uncovered Interest Rate Parity (UIRP) holds as well as PPP, the end result is that of 'PPP in expectations': (see appendices for derivation) This has important implications when trying to test PPP empirically as all the variables are unobservable.
Here I bring in the concept of the real exchange rate, RER, defined algebraically as .
--- Para SEP ---
In the appendices, the concluding result that the RER must equal 1 and cannot change if PPP is to hold is derived.
If foreign prices rise more quickly, it will be exactly offset by a change in the price ratio.
However, the RER may deviate from 1 if there is a change in world output markets.
An increase world relative demand for domestic output would cause the domestic currency to appreciate.
If domestic output increases relative to world output, we would see a long run depreciation of the currency.
Overall, we can say that when there are only monetary effects in the economy, exchange rates obey relative PPP in the long run as set out in the Monetary model.
However, changes in the output market will have an effect which is not in line with PPP.
--- Para SEP ---
The Dornbusch model was an attempt to explain why exchange rates are far more volatile than predicted in the Monetary approach.
It combines the concept of short-term sticky prices with the long-term results of the Monetary approach.
It also contrasts in that it does not assume that PPP holds but does forecast UIRP to hold at all times.
It predicts the exchange rate to make short term deviations from its equilibrium.
Empirically, the model fails badly.
First Generation Currency Crises show how any country with a fixed or pegged exchange rate and that has an increasing money supply (domestically generated) will suffer a currency crisis whereby its foreign exchange reserves become empty.
PPP determines the 'shadow' exchange rate through the monetary approach which will be the exchange rate to replace the fixed regime once it collapses.
--- Para SEP ---
The empirical evidence found in support of LOOP and PPP is rather poor; all versions do badly.
Absolute PPP, as identified earlier, is expected to do poorly empirically due to different goods baskets used across countries to compile their national price levels.
Initial research through the 1970's showed no relationships to support either hypothesis.
Isard's research into LOOP in 1977 found evidence of substantial deviations on the basis of regression analysis for the equation pi * +s = a + bpi + u.
For LOOP to hold, the null hypothesis was such that H
0:a = 0,b =1 but these were not the results he obtained.
Deviations from PPP are predicted in the Dornbusch model due to the price-stickiness in the short term and the monetary approach is a long-term view.
Hence, economists are suffering from an insufficient data period as the deviations may last many years.
Most researchers now believe that the half-life of deviations from PPP are between 3.5 and 5 years depending on the currencies, the price indexes and the sample period which can be tested by Dickey Fuller Unit root tests.
--- Para SEP ---
''',
'''
Michael Mussa came to the conclusion that floating exchange rates lead to much larger and more frequent short run deviations from relative PPP, due to the freedom of capital flows.
A cause of possible LOOP failure identified earlier was that of transport costs.
In the last decade, researchers have found much evidence to support this.
Once the price deviations are higher than the transport costs (arbitrage costs), then prices will revert to the mean, by which the adjustment process is known as the 'Threshold Autoregressive Model.'
This expects a band of transactions costs which result in no adjustments in deviations towards LOOP. One study looked at overcoming the transport cost to see if it was the only variable causing PPP to fail.
The Engel and Rogers study looked at the price volatility for a range of goods in many American and Canadian cities The resulting conclusion was that "The distance between cities explained a significant amount of the variation in the prices of similar goods in different cities, but the variation of the price was much higher for two cities located in different countries than for two equidistant cities in the same country", pointing to a Border Effect.
--- Para SEP ---
In conclusion, LOOP and PPP fail to hold in the short fun but in the very long run, there is some support but with a very slow speed of convergence which would take many years to revert to.
--- Para SEP ---
''',
'''Working memory, the more contemporary term for short-term memory, is conceptualized as an active system for temporarily storing and manipulating information needed in the execution of complex cognitive tasks.
The concept of short term memory store is a store where memories are kept temporarily until the information starts to exceed the short term store capacity and those memories are forgotten and replaced by new ones.
Alternatively, if it is information which is imperative, the short term memory is encoded and transferred into long term memory.
A working memory is composed of three different components: a modality-free central executive, a phonological loop and a visuo-sketchpad.
(Parkin, 1993) Each component is unique and possesses a different function.
The three components can operate relatively independently, so that the articulatory loop and the sketchpad can both hold a limited amount of information without interfering with one another.
However, each component is inter-related and all are required for the functioning of a working memory.
--- Para SEP ---
The phonological loop is a slave system that stores and manipulates auditory information.
(Eysenck and Keane, 1990) It enables a person to remember information in the order in which he was presented it.
The phonological loop is composed of two parts: a passive phonological store and an articulatory process.
Information is processed differently depending on its method of presentation; whether it was presented visually or auditorily.
Auditory presentation of words has direct access to the phonological store, but visual presentation only has indirect access via subvocal articulation.
Information presented in an auditory form is processed by the phonological store.
The phonological store is a memory store that can retain speech-based (phonological) information for a short period of time.
Unless rehearsed, the information tends to fade and be forgotten within about 2 seconds.
The second component is the articulatory control process, which is responsible for two different functions: it translates visual information into a speech-based code and deposits it in the phonological store; and it enables information to be retained in the memory store.
--- Para SEP ---
There was numerous research done to back up this theory on the phonological loop.
An experiment carried out by Baddeley, Thomson and Buchanan (1975), as cited in Baddeley (1999), found a connection between word length and memory span.
It was discovered that memory of shorter words is significantly higher than memory of longer words.
This suggests that capacity of the phonological loop is determined by the temporal duration and that memory span is determined by the rate of rehearsal.
This supports the idea of rehearsal by the phonological store.
As "longer words" are longer than "shorter words", it would take longer to rehearse these words compared to "shorter words" and therefore the memory of shorter words was more successful.
--- Para SEP ---
They further tested the theory on the phonological loop by articulatory suppression: requiring the subjects to generate repetitive irrelevant speech and therefore preventing subvocal rehearsal.
Subjects when under suppression were unable to transfer visually presented information to the phonological short-term store.
As a result of this, the acoustic similarity effect and irrelevant speech effect was removed.
As the subjects were unable to subvocalise the information the information being visually presented to them, the information was not translated into a phonological code.
The information was therefore not registered in the store and any irrelevant speech did not cause any disruption.
It was concluded that a process of subvocal rehearsal is necessary to refresh a fading memory trace before it decays and the information is lost forever.
Subvocal rehearsal also includes subvocal speech and can be disrupted by irrelevant spoken material.
This provides evidentiary support for the theory behind the phonological loop.
--- Para SEP ---
''',
'''
Another component of the working memory is the visuo-spatial sketchpad.
The visuo-spatial sketchpad is concerned with temporary storage and manipulation of spatial and visual information.
Logie (1995), as cited in Eysenck and Keane (1990),
--- Para SEP ---
argued that it can be subdivided into two different components: the visual cache and the inner scribe.
The visual cache is concerned with the storage of visual form and color.
The inner scribe handles spatial and movement information.
Information is rehearsed in the visual cache by the inner scribe.
The inner scribe also rehearses and enables transfers of information from the visual cache to the central executive.
It is also instrumental in the planning and execution of bodily movements.
The visuo-spatial sketchpad is mainly concerned with visuo-spatial manipulations such as geographical orientation.
(Eysenck and Keane, 1990)
--- Para SEP ---
Evidence for Logie's theory was provided in a study carried out by Beschin, Cocchini, Della Sala and Logie (1997) as cited in Eysenck and Keane (1990).
The subject was a man, NL, who had suffered from a stroke.
NL had difficulty describing details from the left side of scenes in visual imagery.
However, he found it easy to perceive the left sides of scenes which indicated that his visual perception system was intact.
It was reported that NL performed badly on tasks that required use of the visuo-spatial sketchpad unless there was some form of physical stimulus or a drawing.
It was concluded that as NL had suffered damage to the visual cache, he was only able to create mental representations of scenes and objects.
The use of stimulus was very helpful as it enables him to use intact visual perception skills to compensate for the redundant visual cache.
--- Para SEP ---
At the crux of a working memory, is the central executive which is the most significant and essential component.
It is assumed to be a limited-capacity attentional system that controls the phonological loop and sketch pad, and related them to long term memory.
(Baddeley, 1999) Baddeley (1996) as cited in Eysenck and Keane (1990), has identified four major functions of the central executive: switching of retrieval plans, timesharing in dual-task studies, selective exclusive attention to specific stimuli and temporary activation of long term memory.
--- Para SEP ---
Baddeley has used tasks with random generation of letters or digits to study the central executive.
A study by Baddeley (1996) was cited in Eysenck and Keane (1990), in which participants had to hold between one and eight digits in short term memory while trying to generate a random sequence of digits.
It was discovered that as the number of digits increased, the harder it was to produce random sequences.
This proves the theory that close attention is needed to avoid producing non-random sequences.
Further research was carried out which involved participants pressing numbered keys along with random digit generilsation.
Some of the experiments were also done in combination with reciting the alphabets, counting from 1 or alternating numbers and alphabets.
The results showed that randomness was reduced by the alternation task suggesting that rapid switching of retrieval plans is a central executive function.
--- Para SEP ---
''',
'''
Baddeley also studied the role of time sharing and attention distribution amongst two tasks.
(Eysenck and Keane, 1990) A study was done on Alzheimer's patients who suffer from progressive loss of mental powers.
The patients participated in a dual-task study involving digit-span tasks combined with the task of placing a cross in each of a series of boxes arranged in an irregular pattern.
The results showed that the control group showed no reduction in digital span performance in the dual task condition.
However, the Alzheimer's patients showed a marked reduction in performance.
This highlights the view that Alzheimer's patients have difficulties distributing attention between two tasks.
--- Para SEP ---
A working memory is like a company run by a director and the people under him.
In this case, the "director" is the central executive and "the people under him" are the two slave systems: phonological loop and visuo-spatial sketchpad.
The phonological loop is made up of a passive phonological store directly concerned with speech perception; and an articulatory process linked to speech production that gives access to the phonological store.
The theory on the phonological loop was studied with the use of articulatory suppression which is the prevention of subvocal rehearsal by generating repetitive irrelevant speech.
Results showed that it is necessary for subvocal rehearsal to occur in order to refresh a fading memory trace before it decays and the information is lost forever.
The visuo-spatial sketchpad is composed of the visual cache which is a visual form and color store and an inner scribe which is concerned with spatial and movement information, allows transfer of information from visual cache to the central executive and is involved in the planning and executing of body movements.
Evidence consistent with this theory was acquired from an experiment done on a stroke patient.
It was reported that it was necessary for some form of physical stimulus or a drawing as he had difficulty performing tasks that required the use of the visuo-spatial.
It was concluded that the damage the patient had suffered had damaged the visual cache and as a result he was only able to create mental representations of scenes and objects.
The central executive system controls the other two slave systems.
It has four major functions: switching of retrieval plans, timesharing in dual-task studies, selective exclusive attention to specific stimuli and temporary activation of long term memory.
This theory was tested in an experiment on Alzheimer's patients and it was discovered that they have difficulties distributing attention between two tasks due to their brain damage.
Another experiment involving random digit generilisation and alphabets clearly depicted that rapid switching of retrieval plans is a function of the central executive.
--- Para SEP ---
All three components have a limited capacity and are relatively independent of the other components.
However, it must be noted that all three are required in order to form a working memory.
--- Para SEP ---
''',
'''The aim of this experiment was to determine whether we could create visual search slopes that were consistent with the feature integration theory.
We conducted an experiment using a computer program that measured participant's reaction times for identifying if a target object was present or absent.
We found that on the whole when the target is easy to find an increase of distracters does not affect reaction time, when it is hard to find though an increase in distracters does affect reaction time.
This shows that integration theory can be suggested to be correct.
--- Para SEP ---
Looking for an object is an everyday task of life, having to search for the location of relevant visual information.
However the search for the relevant object can sometimes be hampered by the presence of irrelevant objects.
The relevant object is known as the target, whilst the irrelevant objects are known as non-targets or distracters.
Examples of this are attempting to identify a friend who is amongst a large group of people, or trying to identify a specific book on a bookshelf.
The task is made harder as there are more non-targets than there are targets.
Some search tasks though are easier than others though, if the relevant visual information has something distinct or specific about it.
Such as shape, size or colour as the target therefore is distinguishable from the distracters around it.
An example being, if the relevant object was a red book on a bookshelf full of green books.
Or to find a friend in the crowd if they have a distinct haircut.
If the target though is very similar to the distracters then the task is going to be much harder.
--- Para SEP ---
The process of visual search has been suggested to have two main stages as stated by Triesman and Gelade in 1980.
Known as Feature Integration Theory.
Firstly feature maps are filled in, these each contain a specific piece of information such as colour, shape and size.
These feature maps though provide no conscious information about the location of the features or any other features at the same location.
Triesman and Gelade suggest at this point the maps are free floating.
For an individual to recognise an object, activity from corresponding locations within each feature map must be combined.
This combination requires a second stage, as an object representation is formed.
In this stage attention is turned to a master map of locations.
Which connects all the feature maps and identifies the location of objects present in the individual's visual field.
The integration of all this information at one single location means the representation of a single object with all its features is produced.
--- Para SEP ---
In this experiment we wanted to investigate whether distracters affect the time taken to locate a target object.
The hypothesis was that when the visual field only contained one single feature then having distracters will not increase the time taken to identify the target object.
However this would not be true when there was a conjunction visual field.
When the target object was not in the visual field during the conjunction condition, it would also take longer to identify this.
This would be because when the target object is absent all the feature maps need to be integrated.
When the target object is present then usually only half the feature maps need to be integrated before the object can be identified.
This is known as the serial self-terminating search theory, so there should be a 2:1 ratio between the absent and present search slopes.
--- Para SEP ---
''',
'''
This was a within-participants design, the Dependant variable was the reaction time.
The independent variables were whether the target was absent or present and which condition was being used.
--- Para SEP ---
This experiment involved 105 1 st year undergraduate psychology students at University.
The participants were in 4 separate groups who did the same experiment during the same week.
They were informed the experiment was voluntary and that they could leave at any time.
--- Para SEP ---
The experiment will be conducted using a computer program that asks you to identify whether a target object is present or absent.
All participants used the same computer program and given the same instructions as to how the run it.
--- Para SEP ---
Participants were first asked to familiarise themselves with running the computer program.
They were asked to do a practice block of trials for all three conditions.
The first condition was a single feature search task where the target was defined by a unique colour (SFC).
The second, a single feature task where the target is defined by a unique shape (SFS).
The final condition was a conjunction search task where the target is only uniquely defined by a combination of both shape and colour (CJ).
Within the conditions each search display will contain either 4, 8 or 16 display items and the target can either be present or absent.
Each combination of target absent or present and display size will be repeated 20 times.
As the computer program was a stimulus presentation program, participants would press the 'z' and 'm' keys on the keyboard to indicate whether the target was present or absent.
Approximately half the group will use the 'z' key to indicate present response and key 'm' for absent responses and for the other half this will be reversed.
The computer will then process the data, creating a set of mean correct reaction times.
--- Para SEP ---
The results show that our hypothesis was generally supported.
The first condition, a single feature search task where the target was defined by colour, there is no increase in reaction (search) time.
The search slope has remained flat which is what we expected, the dependant variable (reaction time) was not affected by the independent variable of whether the target was absent or present.
However in the second condition which was also a single feature search task where the target was identified by shape, the search slope did not remain flat.
The reaction time increased when the target was absent, this is not what we expected.
In the third condition, a conjunction search task where the target was defined by a combination of both shape and colour.
We got results that we expected, reaction times were longer in general and reaction time was around double for when the target was absent compared to when it was present.
--- Para SEP ---
''',
'''
The search slope for the first condition, which was a single feature search with the target being defined by colour, was consistent with feature integration theory.
As the feature maps are filled in, the map concerning colour is not changed.
Therefore the feature map concerning a distracting colour can be ignored.
So only a few of the feature maps need to be integrated for the target object to be produced in the visual field.
The results from the second condition, in which the single search target was defined by shape, are not consistent with the feature integration theory.
As the reaction time is greater when the target object is absent.
Only certain feature maps should be needed to be integrated, so they reaction time should not be affected by whether the target was present or absent.
The number of feature maps needed to be searched should still be few as there was only the one difference between the target object and the distracters.
For the conjunction condition, the hypothesis stated that the reaction time will increase as there are more distracters.
Whilst the serial self-terminating search hypothesis states that the search slopes for the absent and present conditions would have a 2:1 ratio.
As when the target object was absent all the feature maps need to explored and integrated.
As you cannot be sure of the target being absent unless all the maps have been integrated.
When the target was present usually only half the feature maps need to be searched, meaning there should be a 2:1 ratio.
Our results show that the absent slope on the conjunction condition is much steeper than the present slope.
With there being an almost 2:1 ratio of 15.932 to 27.248 concerning display size, which is recorded as the time taken to identify whether the target object was present or absent.
--- Para SEP ---
We also expected the reaction time to increase as the number of distracters in the visual field increased.
As the relationship between reaction time and the number of distracters is a measure of the difficulty of the task.
On the single search conditions reaction time should not have increased due to an increase of distracters, as the target was easy to find.
However when the target was hard to find such as in the conjunction condition, then increasing the number of distracters will increase the time taken to determine if the target is present or absent.
Our results are generally consistent with this theory.
--- Para SEP ---
''',
'''What is slavery?
According to Malinowski it is "...the denial of all biological freedom except in the self interest not of the organism but of its master. [...] The slave does not enjoy the protection of the law.
His economic behaviour is not determined by profits and advantages.
He cannot mate according to his choice.
He remains outside the law of parentage and kinship.
Even his conscience is not his own."
The picture is very grim and entirely inhumane, the slave is totally dependable on his master's generosity and kindness.
However despite this black description, slavery in Brazil was abolished only in 1888 while the British had outlawed it already in 1806.
Why did it take so long for Brazil to finally introduce manumission?
Was it due to the fact that a different type of slavery was implemented which turned to be more "humane"?
Was the Brazilian colonial society so tyrannical and despotic that slave resistance was not powerful enough to influence the change?
What factors enabled the continuation of slavery up until 1888?
--- Para SEP ---
Numerous historians and researchers have closely examined the problems of assessing the severity of slavery in Brazil.
Some such as Chasteen, Levine, and Schwartz are fairly critical of the oppressive and harsh treatment of slaves, while others Foner, Freyre and Tannenbaum seem to paint "Brazil as a veritable haven for blacks."
The publications of these historians tend to portray conflicting images of the lives of the slaves, however this arises from the multitude of factors and the intense complexity of the structure of society at the time.
Nonetheless the situation of the slaves was not as brutal and heartless as Malinowski's quote suggests, the Brazilian colonial society which included the Catholic Church and the Master class did try to "humanize" the oppressions of slavery.
However the degree of the mitigation is debateable.
--- Para SEP ---
The roots of Brazilian slavery trace back to the extensive sugar plantations in the beginning of the 16 th century.
After being dormant for 30 years since discovery of Brazil in 1500 by Pedro Alvares Cabral, the Portuguese crown finally became interested in its colony.
The new settlers realised the economic value of sugar production and unsuccessfully tried to persuade the Indians to labour for them.
The Indians viewed working on plantations to be designed for females and were very ill adapted to performing the task both psychologically and physiologically.
They were accustomed to work as freelancers and did not like the notion of exhausting work schedules.
In addition to that, the natives suffered from the numerous diseases such as smallpox and measles, which had been transmitted from the Europeans who were more resistant to them.
The consequence of this was a dramatic increase of the mortality rate amongst the Indian population, which was followed by famine.
This in turn stimulated an economic crisis for the Portuguese and so to prevent the collapse of the sugar economy, which required a grand workforce, Indians were officially taken as slaves and forced to work on the plantations.
However the demand for labour was too great and so Negroes had to be supplied from Angola and used as slaves.
--- Para SEP ---
''',
'''
Slavery was not a new idea to the Europeans especially the Portuguese it was a common feature in most modern European states.
However the Church could never "proscribe slavery as unconditionally immoral as it functioned in the society of men" and admitted the economic necessity of them in Brazil.
The Catholic Doctrine did not oppose slavery as such since the master and slave were equal in the sight of God, but there was a distinct difference between the Churches treatment of the Indians and the Negroes.
The Jesuits arrived in Brazil as they were to evangelise the Indian pagans, they protected them from exploitation and "were tutelary and paternalist" in their relations with them.
They created specially designed villages or aldeias, which entirely reorganised the Indian society and transformed them into a working force on the missionaries fields and plantations, much to the uproar of the local settlers.
Consequently the natives were used as labour but did not suffer the hardships they would have encountered on normal plantations.
However as the Indians were extremely recalcitrant and their population decreased whilst the Negro population increased, the Indians were regarded as exotic but the Negro was the main source of labour.
--- Para SEP ---
There has been a great deal of turmoil regarding the punishment and brutal, savage behaviour towards the Negro slaves.
Not only were they abducted by force from Angola, but also the transport conditions were so appalling that about 20% of the slaves perished while crossing the Atlantic.
They starved and thirst in the insanely cramped conditions, epidemics were frequent, but this environment broke the slaves will enabling subjugation.
The harshness of life in Brazil and the foul living conditions forced the slaves to resist, however they paid dearly for impertinence.
There are numerous accounts of barbaric treatment of slave's punishments that ranged from flogging to castration, novenas or breaking on the wheel.
It was unsurprising that the slaves resisted, rebelled and consequently runaway forming quilombos, the most famous being Palmares, which was eventually destroyed by the Portuguese forces in 1694.
--- Para SEP ---
It seems evident that throughout the 16 th and 17 th century the situation of the slaves depended on the generosity of the master as settlers frequently ignored the royal legislation and continually practiced resgate which involved capturing Indians and selling them to plantations.
The presence of the bandieras, which were bands of 200 men who tracked and hunted down Indians, later to be used as slaves also ridiculed the legal system.
This ignited fierce opposition from the Church yet they were powerless to challenge the vast Brazilian coast and aid every persecuted victim.
Technically the Portuguese crown had established an attorney-general under whose jurisdiction came all matters relating to the treatment of slaves and fines were placed upon those who neglected them, however the efficiency of this charges are unknown.
--- Para SEP ---
''',
'''
A historian, Philip Curtain, has estimated that at least 9 million Africans were shipped across the Atlantic between 1502 and 1870, from which one third were destined for Brazil, making it Latin America's largest single country to import so many slaves.
The consequences of this are essential and are the origins of present days multiracial culture in Brazil.
By the end of the 16 th century the blacks formulated 70% of the Brazilian workforce, which also meant that the blacks racially dominated Brazilian society.
By 1715 the ratio between blacks and whites was 3 to 1 and by 1819 less than 20% of the entire Brazilian was purely white.
Due to the interracial relationships established between the three dominant races: blacks, whites and Indians a new demographic dimension of intermediate ethnicity was created.
Marvin Harris claims, "forty different racial types were now elicited" such as the mulattoes, mameclucos and cafusos.
In addition to this the decrease of the pressure placed on the sugar plantations and the change towards coffee plantations and mining in the 17 th and 18 th century resulted that the relations between the master and slave became heavily personalised.
It seems that masters slowly came to realise that through the improvement of slave working and living conditions, they were more effective in working and resisted less.
But on the other hand some viewed slavery in economic terms and so resorted to the basic theory of slave management: "work them hard, make a profit, buy another."
It was anticipated that there was an annual loss of 10% on the slaves, but this resulted from the unhealthy living conditions and the overworking of slaves.
However by the 19 th century the masters realised that it was more productive for a worker to be free as he had more incentive to labour hence more than half the labour force was emancipated.
--- Para SEP ---
Tannenbaum claimed the Portuguese slave laws specifically preserved the human identity of the slave as he could marry freely, seek another master, own a plot of land and had the right to buy his own freedom.
The Jesuits in 1750 managed to convince the Portuguese crown to issue a decree, which declared that all Indians were born free and could only be enslaved if they practiced cannibalism, or were captive in a "just war".
In 1784 another decree stated the prohibition of branding Negro slaves.
However there was "no policeman" to implement this law, even the priests and clergy could not enforce it, they could only obtain information about the treatment of slaves through actually visiting the plantations.
This was only introduced in 1789 with the Real Instruccion, which reflected the humanitarian and protective approach of the Church.
However they could not reprimand the master of the plantation besides it was difficult "to apply legal restraints to the planter's use of the lash".
--- Para SEP ---
''',
'''
David Brian Davis argues that slaves had technically 72 hours of leisure a week and Stanley Elkins claims that a master was "obliged to give liberty to his slaves on all Sundays and holidays which totalled 85 in year". In addition to that there were no legal bars to marriage, education and freedom, which was true to some extent, however the life of a slave rested in the senor de ingenio's hands.
--- Para SEP ---
Tannenbaum maintains that the master owned the man's labour but not the man, this may sound farfetched but by law and custom the slave could retain his produce and cultivate it on his plot, which later became his savings.
This was the basis of self-purchase and throughout the 18 th was very popular; the coartacion was developed in 1871 and enabled payments for freedom in instalments.
There were various ways of obtaining freedom but the master normally rewarded hardworking slaves with manumission.
The old, the sick and the children of the owner of the plantation, even if they were illegitimate, could be let free.
If a plantation fell into economic problems, the master would sell manumission at a low price, maintaining a slave was an expensive enterprise.
The Church also managed to emancipate slaves as children if they were baptised then it was the godfather's responsibility to liberate them and in most cases the master was the godfather.
--- Para SEP ---
The living conditions of the slaves depended significantly on where they worked and in which part of Brazil.
The coast was more comfortable than working in the mainland, whilst mining for diamonds and gold, the coast was also characterised by urban settlements.
Life in the cities was more pleasant and less harsh than in the rural areas due to the greater power and importance of the legal system, which in cases aided the slaves.
Rural areas were either haciendas or engenho plantations, where the structure of society was divided into the mill owner, lavradores and slave.
One vital element that harassed entire Brazil population was the food shortages, but the masters soon realised that by giving plots of land to slaves they could develop subsistence farming which was an answer to the problem.
Historians constantly debate about the social relations between races, Tannenbaum claims that there was social, economic and cultural prejudice but no racial as such.
It was the type of employment one did that represented his social status and not Lockhart and Schwartz point out the colour of the skin that mattered.
--- Para SEP ---
Evidently the social structure of the Brazilian slaves is immensely complex and it is not possible to make a definitive, universal statement about the conditions of slavery life.
There are masses of exceptions, various interpretations and consequently contradicting statements.
The Church tried to mitigate the oppressive harshness of slavery through the use of Jesuits and Franciscans in their establishment of missions, this didn't come without a cost, as the Indian society was transformed in order to avoid exploitation by settlers.
The Catholic Doctrine was not unanimously against the slave trade and the different approach to the Indians and Negroes resulted in prejudice and racial discrimination.
The legislature was implemented but not everyone followed, as there was little incentive and actual authority to ratify and protect that oppressed.
The Brazilian colonial society continued to use slave labour till 1888 and was one of the last countries to become rid of it.
Slavery had been part of the Brazilian way of life but the fact that efforts were made to reduce its impact and by the 1880s most of slave population had already been set free, therefore Brazilian colonial society did not dent the humanity of slaves.
--- Para SEP ---
''',
'''The purpose of this paper is to discuss whether the alignment of business with information technology (IT) could be a possible way to gain a decisive competitive advantage on organizational performance.
The paper examines on the possibilities and necessities of alignment by evaluating sources, ideas and researches of different authors and provides an insight on the diversity of prior research on alignment.
Such a deduction provides firms with a platform on several contingency factors of alignment where they position within their respective industries.
--- Para SEP ---
The fervent debate concerning strategic alignment of business and IT is under discussion for many years.
Businesses are investing decades of time and billions of dollars to struggle for achieving competitive advantage by using information systems.
However, as Luftman and Oldach (1996) state, organizations seem hard to position itself and identify their long term benefits by harnessing the capabilities of IT. This 'competitive advantage' paradox has been both generally supported (Venkatraman 1989, Niederman et al 1991, Earl, 1993, Boynton et al 1996, Davidson 1996) and condemned (Carr 2003) in the literature.
As a consequence, there is no precise, commonly agreed notion of preposition of alignment as well as the contribution of information technology to the success of organizations.
--- Para SEP ---
Business-IT alignment is defined as a process of 'applying IT in an appropriate and timely way and in harmony with business strategies, goals, and needs'.
(Brier and Luftman 1999) In this paper, applying business-IT alignment as a mean to strive for competitive advantage in dynamic marketplace as the underlying theory (Adcock et al 1983, Cardinali 1992, Faltermayer 1994), possible antecedents of alignment are analyzed to give insights on the extent in which IT contributes to the business success.
--- Para SEP ---
The importance of business-IT alignment is widespread in IS research literature.
Papp (1995) indicates that this concept has documented since the late 1970s (McLean and Soden 1977, IBM 1981, Parket and Benson 1988, Mills 1986, Brancheau and Wetherbe 1987).
Alignment addresses a coherent goal across different departments with IT perspectives.
Such cohesive organization goal and IT strategies enable better leveraging the business - IT partnership.
This harmony can be extended and applied to help the organization in identifying new opportunities.
(Papp 1999)
--- Para SEP ---
Brier and Luftman (1999) point out that the traditional methods for planning the business strategies have not taken full advantage of IT. That is one of the main reasons why organizations fail to realize the underlined potential of their IT investments (Barclay et al 1997).
Brier and Luftman (1999) adopt and modify Henderson and Venkatraman s' (1989) strategic alignment model (Fig.
1), which in concert with their enablers (enabling activities) / inhibitors (inhibiting activities) research from 1992 to 1997 consecutively.
--- Para SEP ---
Brier and Luftman (1999) interviewed the executives and obtained the data from consultants' engagements and eventually identified six most important enablers and inhibitors for business-IT alignment.
They argue that by maximizing alignment enablers and minimizing inhibitors through a six-step approach, strategic alignment of business with IT can be achieved.
--- Para SEP ---
Brier and Luftman s' (1999) study on enablers and inhibitor based on the strategic alignment model (Henderson and Venkatraman 1989) which help us to understand the relationship between the organization/IT process and strategies required in organization.
This is a leading principle both for research (Barclay et al 1997) and for practical (Luftman and Oldach 1996, Brier et al 1999) purposes.
However, the model presumes that management is always taking full control of what the situation is and clearly understanding what is going on, meanwhile, information infrastructure can deliberately be aligned with emerging management insights (Ciborra and Hanseth 1998, Ciborra 1998, Maes 1999, Galliers and Newell 2003).
As Earl (1996) states, it takes a considerable time and effort to examine and investigate the processes and applications in the organization.
This can be done in a future plan, but it is not an immediate panacea.
--- Para SEP ---
Also, this model might become ineffective particularly in the rapid changing environment because flexibilities can be gained by allowing certain misalignment within the organization (Ives and Jarvenpaa 1994).
For example, paper-based fax machines are still widely used in the organization as people can check and read the fax documents in his own time, though video-conferencing technology can greatly reduce the processing time for communication and provide streamline business process.
What's more, according to Maes's (1999 p.5) study, a 'lack of balance' (the existence of non-alignment) between business and IT is often a source of innovation and success in the organization.
--- Para SEP ---
Social dimensions are another aspect being neglected in the model.
This lack of attention resulted in a misinterpretation of alignment - solely integrating business and IT s' strategies and infrastructures (Benbasat and Reich 1998), whereas ignoring the impacts of 'organizational learning'.
(Ciborra 1998)
--- Para SEP ---
Brier and Luftman (1999) develop the ideas of alignment enablers and inhibitors rested on several antecedent assumptions, as summarized here.
--- Para SEP ---
A successful IT track record in department tends to improve its relationships with other business units (Earl 1996).
Benbasat and Reich (2000) argue that the communication between business and IT executives can be enhanced by the degree of IT successful implementation.
Brier and Luftman (1999) find that lack of IT track record - 'IT fails to meet its commitments', ranked third in the list of inhibitors, contributes to the failure of alignment.
They presume: Successful IT History facilitated the business - IT alignment.
--- Para SEP ---
Prior researches on strategic business IT alignment highlight the importance of knowledge sharing between business and IT executives (Carrico and Johnston 1988, Venkatraman 1989, Gurbaxani et al 2000).
This importance of knowledge management and sharing mechanism within organization - 'IT understands the business', ranks third in the top six enablers of business IT alignment (Brier and Luftman 1999).
Therefore, the assumption of the study is: Knowledge sharing between business and IT executives enhances the business IT alignment.
--- Para SEP ---
Strategic business IT planning has been widely accepted as a core Brier and Luftman (1999) tool of managing IT resources and business strategy (Ives and Jarvenpaa 1993). conclude business IT planning in his findings as second most important factors both in enablers - 'IT involved in strategy development' and inhibitors - 'IT does not prioritize well'.
Thus, the second assumption here is: Well-defined and comprehensive strategic planning process promotes the alignment of business and IT.
--- Para SEP ---
Successful relationship management between Business and IT executives plays an important role both in planning and implementing the IT projects.
(Earl and Feeny 1997 Feeny and Ross 2000) As Brier and Luftman (1999) state, 'senior executive support for IT' and 'IT/business lack of close relationships', the most important enablers and inhibitors respectively, are critical to the successful implementation of business - IT alignment.
They assume that: Active business and IT executives' relationship management is positive related to business IT alignment.
--- Para SEP ---
This paper examines the assumption of these four factors, three of which - successful IT history, knowledge sharing between business and IT executives and strategic business IT planning - are directly based on prior empirical research on alignment (Kirs and Sabherwal 1994, Benbasat and Reich 1996, Benbasat and Reich 2000, Cragg et al 2002, Chan et al 2006).
The effect of the forth factor - Business and IT executives' relationship management has been empirically examined by the antecedents discussing the evolving role of CIO and IT executives.
(Earl and Feeny 1997, Feeny and Ross 2000)
--- Para SEP ---
Rather than simply developing a theory-based, deductive study of alignment, Brier and Luftman (1999) employ an interpretive, data-driven approach to examine the critical factors on alignment.
Research covers 1,051 business and IT executives over 500 Fortune 1,000 US organizations attended seminars addressing business IT alignment.
Attendees are assisted to assess the contribution of IT and identify their role in the organization.
--- Para SEP ---
Even the most comprehensive researches are not able to capture the whole picture of the multidimensional world.
(Mingers 2001) Luftman and Brier s' (1999) study adopts generally a single positivist approach which might often gain only limited views of the particular research situation.
For example, the findings might be limited to the attendees to be measured or quantified.
Also, according to Habermas's (1979, 1984, 1987, 1993) theory of communicative action and adopting by Mingers (2001), there are three worlds relevant to research methods, namely the 'Material World', 'our Social World' and 'my Personal World'.
Each domain has its specific mode and relationship to the research.
In Luftman and Brier s' research, individuals' subjective meanings might be amplified and thus neglecting the more general social and material context in the alignment.
A pluralist methodology combining several different paradigms is suggested to enrich the research results (Mingers 2001) and as a consequence, getting a more reliable picture of the research situation.
--- Para SEP ---
The study examines several large organizations in US in both private and public sectors (Brier and Luftman 1999).
However, it is questionable whether these findings (enablers and inhibitors of alignment) are still valid to different sizes of organization in different nations.
For example, small and medium-sized enterprises (SME) tend to use centralized structure to coordinate their working units.
Therefore, 'IT resources shared' - ranked eleventh in enablers, might be more significant to the business - IT alignment.
--- Para SEP ---
Even the fervent adherent in the context of business IT alignment cannot deny that the frameworks and concepts developed is not at all unequivocal.
Various dimensions of strategic alignment, such as the degree of alignment against productivity, are amphibious.
It creates rooms to explore whether all organizations are well-served and benefited equally by allocating scarce resources to improve alignment and whether the adoption of particular business strategy or industry influences the extent to alignment matters.
--- Para SEP ---
Not surprisingly, severe critique has been given on the difficulties and necessities of business IT alignment (Keen 1996, Ciborra 1998, Ciborra and Hanseth 1998).
Carr's (2003) theory on the role of IT is well known in the literature.
He claims that as IT's power and availability have expanded, strategic value of IT investment decreased.
IT is shifting from a potential source of competitive capabilities to just a cost of doing business.
It is essential, but not strategic resource in the organization.
Therefore, IT and business alignment do not matter anymore as IT can no longer provide the organizations with competitive advantages.
In his opinion, the key of managing IT and business units is not seeking advantages of alignment, but is defensively cautioned for the cost and risks in IT investment.
--- Para SEP ---
Several possible antecedents of alignment deal with business - IT alignment as illusory, even inexpedient (Ciborra 1998, Maes 1999).
Business developments are not solely depended on IT development.
Even the rigid installation of IT infrastructure might be confined by the industry standards or political requirements.
That's what Arthur (1988, 1994) called self-reinforcing mechanisms to the organization in economics.
Chan (2002) argues that total business and IT alignment is complex and difficult to achieve.
Earl (1996) further states that alignment is hard to achieve unless an understanding and shared vision within the organization, from top managers to front line staff.
However, objectives are not always fully appreciated down the line in an organization, where series of decisions are taken by various level of management.
For examples, details of the hardware, software and operation platforms might be reflected in an emphasis on cutting costs rather than adding value by the line management.
Besides, many authors (Coakley et al 1996 and Ciborra 1998) question the measurability of the degree of business IT alignment.
As alignment is a continuous process that requires monitoring over a long period of time and handling contingencies if necessary, difficulties on evaluating and measuring its effectiveness remain a major obstacle to alignment.
--- Para SEP ---
Nicholas Carr's (2003) discussion on the role of IT has been widely examined.
His source of idea based on the argument that IT - carries digital information just as railroads carry goods and power grids carry electricity, has become merely commoditized product and no longer confers competitive advantages, and thus contribute to the unnecessary to any alignment with business.
His theory of commoditization of IT, with electricity and railroad analogy, however, do have their limitations and constraints.
(Brown et al 2003) IT systems are not analogous as standard electricity or the railway gauges, rather than any confinement or standardization, the continuous improvement in processing power and performance have had a multiplicative effect coming together, leading to an extension to its reach to other areas like biological organisms and RFID. Furthermore, IT brings about new practices and possibilities for the organization to create and compete in the marketplace (Brown et al 2003).
--- Para SEP ---
Although the prior studies on business - IT alignment has been helpful in general, many previous researches have ignored the notion of context dependency employed in the real world (Goedvolk et al 2000).
This paper thus examines and evaluates the antecedents of alignment and provides insights into why prior research on business - IT alignment may have reported diverse and sometimes conflicting findings.
In fact, different organizational structure (both formal and informal), social dimensions, industries and business practices are likely to apply to different approaches and degrees of alignment (even it is misaligned) (Brown and Magill 1998, Ciborra 1998).
In other words, implications of alignment (or misalignment) depend on various contingency factors, like organizational size, marketplace, social and political concerns, internal and external relationship, industry or strategy.
--- Para SEP ---
The influence of contingency factors on business - IT alignment can be illustrated in the well established typology of business strategy, including Prospector, Defender, Analyzer and Reactor (Miles and Snow 1978, Chan and Sabherwal 2001).
Prospectors are those who seek for new product opportunities and emphasis on flexibility and innovation.
They usually engage in dynamic marketplace and efficiency - oriented operations.
Defenders desire for stability and cost containment.
They function in a predictable environment with a mechanistic organization structure.
Analyzers concentrate on pursuing the flexibility and efficiency simultaneously.
They employ a matrix structure to achieve innovation while maintaining the economies of scale to their core products.
Reactors are excluded here as they employ an unconscious strategy, according to Pearce and Zahra (1990).
--- Para SEP ---
Chan and Sabherwal s' (2001) study reveals an insight where different business approaches contribute to different degree of alignment.
For examples, in mining industry, organization strategy tends to be defensive as the market environment is largely predictable (Defender).
It is easier for them to achieve alignment but lesser advantages provided (Chan et al 2006), perhaps due to the fact that they do not have an urgent need compared to the Prospectors and Analyzers (Keen 1996).
--- Para SEP ---
To conclude, for IT managers, this paper provides a platform to consider on various approaches of alignment.
However, towards a contingency approach, this suggests that future study should be more market, structural or strategy specific in order to gain a more reliable result in a given research situation.
--- Para SEP ---
''',
'''Nowadays, teenage smoking is a common issue for most countries worldwide which draws upon a lot of concern.
As tobacco use has been identified as a major preventable cause of premature death and illness.
Each year about 440,000 people die in the United States from illnesses related to cigarette smoking and a great further number of deaths are attributable to second hand smoke.
Smoking initiation usually occurs during adolescence, while the vast majority of smoking related deaths occur in middle aged and elderly people.
Therefore prevention of smoking initiation among adolescents is a powerful strategy for keeping away much of the illness associated with tobacco use.
To target for a right intervention control, it is important to understand primarily of the associated risk/protective factors in terms of influencing teenager's choice of smoking uptake towards to which also form the basis of this empirical research. Results showed that peer influence determines the strongest relationship for an adolescent to become a smoker.
--- Para SEP ---
Research on the factors associated with youth smoking has been based on the following areas:
--- Para SEP ---
1) Socio demographics (individual's gender, age, races, income and accessibility of tobaccos): Historically, prevalence of smoking was higher for male than female (Surgeon General 1994).
However recent trends show there's an equal rate between the two genders as female smoking prevalence is increasing.
Reasons probably include factors of increasing weight concern among females (French, 1994).
Despite of gender indifference, Winkleby MA 1993 identifies high school students are more susceptible to smoking behavior than middle school students as it is usually associated with the critical age in a youth's development.
It supported the statistics in USA that higher smoking rate among teenagers at/ above14 than teenagers than below this age and higher smoking prevalence was found among people of American Indian origin. Another variable has been found to correlate with adolescent smoking is related to their income, young people with more spending money showed higher levels of smokin (W.Schlenger, 2003).
Towards easy accessibility of tobacco products will also increase the onset of smoking.
(Difranza JR, 1996).
--- Para SEP ---
2) Environmental factors (peer influence, living people influence and parental attitude): Adolescents with close friends smoking may be more susceptible to smoking due to the direct pressure among their friends and a desire for approval among their social group (Kimberly, 2003).
Having living members smoking at home may also increase individual risk (Shelli, 2003).
Further study has investigated a number of related factors associated with parental attitudes showing that strongly parental disapproval of smoking was inversely correlated with adolescent smoking.
(Biglan A, 1995)
--- Para SEP ---
3) Behavioral variables (academic performance and aspirations): Smoking status has been found to be consistently related to school performance, educational aspirations and commitment to school.
Students who committed to schooling, doing well in academics are less likely to smoke than those who do not (Tyas S, Pederson L 1998)
--- Para SEP ---
4) Community factors (anti tobacco advertising/ discussion of dangers of tobacco use among in school): A research study done by Sherry Emery in 1999 shows that anti tobacco adverts on television which identifies consequences of tobacco use are associated with more favorable anti smoking attitudes.
Also according to B S Flynn 1992, discussions of dangers of tobacco use in school can have a positive effect on increasing teenager resistance towards smoking behavior
--- Para SEP ---
Therefore after identifying the associated risk/protective factors towards affecting a teenage smoking uptake status, in the following section I will carry out an empirical regression analysis on investigating the significance of these explanatory factors.
The data I've selected to use will be based on National Youth Tobacco Survey in USA 2004 conducted by the Cancer diseases centre since CDC is one of the 13 major operating centre of the Department of Health and Human Services (HHS) which is globally recognized for conducting health research and investigations. The survey consists of 27933 observations; the sample target is on US middle school (grade 6-8) and high school (grade 9-12) students who are basically from 4 ethnicity regions (white, Hispanic, Asian, African American).
The reason why I have chosen to pick for year 2004 dataset to estimate as the research in that year was conducted in a larger sampling set and it only consisted of small amount of missing datas compared to some other datas of recent years.
However it is not without limitation since this questionnaire was only available in English.
So comprehension maybe limited to some ethical participants whose first language is not English.
This may one of the main potential bias in the data result.
--- Para SEP ---
In this section, the dependant variable we are going to test is the likelihood of the adolescent whether or not he/she is going be a smoker.
Explanatory variables include examining into how factors such as adolescent's gender, age, ethnicity, friends influence, parental attitude, living people influence, their interest in schooling, amount of income earned, effect of anti smoking advertisement and school discussion of danger of tobacco use will independently correlated with the dependant variable.
The choice of these explanatory variables will allow us to compare the effect of how biological/genetics factors, social environment, individual characteristics, commodity price, control intervention each can increase or decrease the probability for a adolescent to take up smoking.
The test of dependent variables will also be binary in nature.
Therefore before turning to estimate the regression, it is necessary to generalize all the dependant and independent variables correctly into dummies variables from the raw dataset. Since both the dependent and independent variables are dummy variables, therefore OLS is not the best estimation methodology for a non linear regression.
A better approach to estimate this regression is to run the binary choice and taking the probit regression approach.
--- Para SEP ---
Before trying to interpret the significance of the factors, it is necessary to check if there is multi-collinearity in the model as even though it will not affect the coefficient output, it will inflate the standard errors thus bias the z statistics.
--- Para SEP ---
Multi-collinearity problem existed between two variables living people smoke and parents influence.
As the correlation shown in the table between them is 0.078890 which is bigger than the correlation shown between the independent and dependant variable when we looked into the relationship between parents influence and smoker which is only 0.03920.
To solve the problem, I will choose to drop the variable of parental influence and estimate the model again.
--- Para SEP ---
The regression model:
--- Para SEP ---
--- Para SEP ---
(Smoke/Non Smoke)= Constant+ Age +Female+ American Indian + Peer smoke+ Living people smoke+ loss in interest in schooling + high income+ parental strict attitude + discussion of danger of tobacco use+ anti smoking advert+ Error term
--- Para SEP ---
For interpreting the results, firstly we will use F statistics to test for the overall significance of the joint dependant variables
--- Para SEP ---
Ho: β1=β2=β3=β4=β5=β6=β7=β8=β9=0
--- Para SEP ---
H1: β1,β2, β3, β4, β5, β6, β7, β8, β9 not equal to 0
--- Para SEP ---
LR statistics showed in the table referred to that all slope coefficients except the constant are zero which gives overall significance of the joint dependant variables in related to the independent variable and will be compared with the F critical value.
--- Para SEP ---
F critical value= no of restrictions, no of observations- no of parameters-1
--- Para SEP ---
F=9, 27933-9-1
--- Para SEP ---
In the F statistics table we set the 5% significance level.
Number of degree freedom is ∞ and number of parameters is 9, so the critical value we found in the F statistic table is 1.88.
The LR statistics =5014.869
--- Para SEP ---
We can conclude that null hypothesis can be rejected.
All or some of the parameters are significant towards the independent variable.
--- Para SEP ---
Afterwards we will try to use the table to interpret for each independent variable's relative significance towards the dependant variable.
--- Para SEP ---
Z statistics critical value will be based on 5% significance level in a normal distribution lying below-1.96 or above 1.96 for null hypothesis (H0) to be rejected and accept alternative hypothesis (H1) in which we have sufficient evidence to conclude that the explanatory variable is significant.
Value lying within -1.96 to 1.96 suggested that we don't have enough evidence to reject null hypothesis, hence the variable is proved to be insignificant.
--- Para SEP ---
Critical P value must be less than 5% for the variable to be significant.
--- Para SEP ---
Below we will examine the magnitude for each of the significant variables
--- Para SEP ---
Overall, strongest influence which affects a teenage smoking uptake is among of friends influence.
One close friend smoke will increase the risk of individual to become a smoker by 84%.
Other results show that with one living people smoking at home will increase the individual susceptible to smoking risk by 39%.
3 other variables found for those whose age (between 14-21) or who are being considered as having a loss interested in schooling (missed school more than 5 days in a month) or who have a high income ( having more than 20 dollars a week) are each equally having about 34% chance of likely impact upon an individual to become a smoker.
Weakly significant results found for two protective factors which are anti-smoking advert and school discussion of danger of tobacco use as it only tend to show of having about 14% and 6% respectively on reducing the probability of the teenage to be a smoker.
--- Para SEP ---
Hence after testing each of the significance of these variables, we are going to look into the predictive power of the model which is how well the modelling fit the actual data.
The conventionally computed R^2 for measuring goodness of fit is of limited meaning in the dichotomous response models.
As the independent variables can only be two binary numbers either Y is equal to 0 or 1.
All the values of Y will all lie on X axis corresponds to 0 or on the Y axis corresponds to 1.
It's meaningless to look for how well it will fit the model in regarding to what linear regression has used.
Instead Eviews presented one better measure of goodness of fit for binary regression model which is the Mcfadden R^2 also ranges between 0 to 1.
The more related to 1 the higher the accuracy of the model.
In our model the Mcfadden R^2=0.255204 this maybe because generalizing raw data is normally hard to obtain high accuracy and there's some missing observations.
However in binary regression models, goodness of fit is not of primary importance.
What matters are the expected signs of the regression coefficients and their statistical and /or practical significance.
We will decide to take an analysis into the expectation prediction test table.
--- Para SEP ---
To take a look into the upper table first, we will try to compare the estimated equation with the actual constant probability.
We will set 0.5 as the success probability and probability lower than 0.5 will consider as a weak or unsuccessful probability.
For the first two columns, Dep=0 refers to the teenager who is a non-smoker and Dep=1 refers to the teenager who is a smoker.
'Correct 'classification for a teenager being a non-smoker equals to the probability less than or equal C for dep=0 or the prediction for the teenager to be a smoker equals to the probability bigger than C for dep=1.
In this model, we termed it as correctly predicted dep=0 as sensitively and correctly predicted dep=1 as specificity.
Overall we found that the model correctly predicted number of non smokers as 20664 (accuracy rate is 97.75%) and number of smokers as 524 (accuracy rate is 15.14%).
The move from the right hand side table of constant probability to the left of the estimated equation provides an overall predictability of the estimated model.
In the constant probability it correctly predicts all the non smoking teenagers of dep=0 since it is 100% but incorrectly predicted all of dep=1 which is among teenagers who smoke.
The total gain from the expected model improves the overall dep=1 by 15.14% while it worsens the predicted probability of dep=0 by 2.25%.
Overall the estimated equation correctly predicts 5.02% better than the constant probability.
The percent gain for the estimated equation is 1.42% better predict the outcome than the constant probability of 85.93%.
--- Para SEP ---
The half bottom part of the table will be the compute expected number of y=0 and y=1 observations in the sample.
It shows that the expected number of teenagers who is likely to be non smokers is 18782.89 and the expected number of teenagers who is likely to be smokers is 1103.23.
The total gain is about 5.02% and 20.75% gain over in the predictability than the constant probability model.
We can conclude that the probit model is a better predicted estimated measured model.
--- Para SEP ---
Finally to add into additional monitoring of the effectiveness of this model we run the goodness of fit test by Andrews and Hosmer- Lemeshow.
--- Para SEP ---
We try to measure the H-L value, null hypothesis is that deviations between the expectations and actual observations are zero which means the model predicts perfectly.
Rejection of the hypothesis referred that the models predicts poorly since the expectations and actual observations are actually derived.
--- Para SEP ---
Chi squared critical region=10-2, 5% significance level =8, 0.05 = 15.50731
--- Para SEP ---
H-L statistics from the table= 12.2649 <15.50731
--- Para SEP ---
P-value=0.1398>0.05
--- Para SEP ---
Andrew statistics=15.4119<15.50731
--- Para SEP ---
P-value= 0.1177>0.05
--- Para SEP ---
Since both of the statistics show that they are below the critical value and the p-value are both greater than 0.05, we can accept the null hypothesis which means that the expectations and actual observations will not derive, the model fits closely to the actual data at an acceptable level.
--- Para SEP ---
Towards the primary finding from our result, it turns out peer influence has the strongest risk impact on teenage smoking uptake.
With living people who smoke is associated with the second most significant susceptible risk.
Teenage who have a poorer academic orientation, do not process an interest in schooling are likely to be the third significant factor towards for smoking behaviour.
Having a higher income is associated as the fourth potential risk.
However the two protective factors anti tobacco smoking advert, discussion of dangers of tobacco use in schooling are only shown to be weakly significant and only have a small effect on reducing teenage smoking uptake.
--- Para SEP ---
As a result policy implications may suggest that control tobacco strategies should be simultaneously working along with each other in order to generate a larger effect.
Comprehensive interventions should placed upon on school education programs included helping students to identify the dangers of tobacco use, teaching for self control and refusal skills against negative influences .
However the positive effects of these programmes are most tend to be short run and it will only be sustained when it is coordinated with community efforts such as promoting a healthy living environment at home, reducing accessibility for teenage among tobacco use, enforcing a stricter parental attitude among their children.
Together with broad based community efforts in which individual negative attitudes and behaviors are targeted for change, continue promoting media interventions to convey anti tobacco smoking messages to teenagers, increasing prices for tobaccos can then actually led to a more substantial long term success in reduce youth smoking.
--- Para SEP ---
From the result found, the target group should be mostly for high school than middle school students.
Other than age, two other factors such as gender and races the teenage belong to are not significant towards to have a relationship with the probability of the teenager's smoking uptake.
The former confound to what recent literatures have found whilst the latter is hard to conclude as statistics shown that American Indian have higher smoking rate than other races.
Therefore we may suspect that there are factors other than genetics that affected this social group to associate with a higher smoking rate or it maybe associated with data errors that actually occurred to bias the result.
Therefore improvement over the model towards future work should include to test for time series regression to check for the persistence significance/insignificance of the explanatory variables Since given limited amount of time for data collection, some of the variables have not been included, such as how the accessibility of tobacco correlates with individual smoking uptake, it is greatly recommended to be added into future research.
It can be further enhanced if the reciprocal relationships between those significant risk/protective factors can be explored, all of which have important implications for policy researchers in developing for more effective youth tobacco intervention programmes in the future and tailoring to those who are most vulnerable to the risk.
--- Para SEP ---
''',
'''Jessica is 14 and lives with her mum and sister, Emma who is 16.
Jessica and Emma have always been close, and even more so when their parents separated two years ago.
They always looked very similar, so much so that when they were younger people often mistook them for twins.
But, these sisters were actually very different.
--- Para SEP ---
But, Jessica feels that she is living in her sister's shadow and she no where near as good in any way.
At school all of her male friends say how much they fancy Emma and other girls are jealous, secretly wanted to be her.
Jessica wishes that she was as popular as her sister and secretly wanted to be her too.
--- Para SEP ---
Jessica is much shorter than Emma was at that age and doesn't share the same sporty figure.
Her skin is very pale and her hair has always been frizzy.
--- Para SEP ---
Emma had always been good at every subject and is one of the best at sports in the school.
Jessica is bright but has never excelled greatly in any subject area, and on the sports field is awkward and clumsy.
Jessica often feels that her parents prefer Emma and feels jealous when they tell other people how well Emma is doing.
--- Para SEP ---
Jessica has always tried to be like her sister, but wants to find some way of showing people that she is just as special.
Emma is so confident and although Jessica is not shy, the confidence she shows is always part of an act that she puts on to hide her real feelings.
--- Para SEP ---
One thing that Jessica does well in is drama.
She hasn't auditioned before for any of the school productions because she had been too shy.
However, in drama classes she finds that she is more confident and, in fact, very good.
She been to see lots of shows before and had always dreamed secretly that she up on stage too.
--- Para SEP ---
Jessica's parents do not consider drama to be a very good route to follow and insisted that she did not pursue the subject at GCSE. Jessica is desperate to find a way to prove how talented she is and show that there is something that she, and not Emma, is good at.
--- Para SEP ---
'''
]