Monday, September 30, 2019

Strain-Controlled Triaxial Test - Geotechnical Engineering

1. INTRODUCTION

From the civil engineering point of view, soil is the medium through which structural loads are transferred safely and efficiently. Soil should be consistent enough to satisfy these requirements even under inevitable events such as earthquakes and bomb blasts, so seismic effects must be incorporated into the soil properties. Unlike concrete or steel, the engineering properties of soil cannot be found using the classical theory of dynamics and vibrations; they can be found only through field and laboratory tests. Various techniques are employed for this purpose; the most common are the cyclic simple shear, cyclic triaxial shear and cyclic torsional shear tests. The dynamic triaxial test is the most effective method for arriving at the static and dynamic properties of soil, such as cyclic deformation, damping ratio and liquefaction strength. Though it has some limitations, it is widely used for the analysis of soil under seismic forces. The fundamental parameters obtained from this test are cyclic shear stress and cyclic shear strain, through which the soil response is defined. Tests can be run either stress-controlled (cyclic shear stress) or strain-controlled (cyclic shear strain). The test setups are highly sophisticated and costly, and they require highly skilled operators. The measuring devices in the system must be calibrated and sealed properly, as they are very sensitive to disturbance. The results reflect the site seismic condition closely, provided the strain level is kept small.

Fig. 1.1. Triaxial cell. Fig. 1.2. A typical cyclic triaxial apparatus.

1.1 WHY DYNAMIC TRIAXIAL

Dynamic forces are time dependent and usually cyclic in nature, i.e. they involve several cycles of loading, unloading and reloading. An earthquake is three-dimensional in nature, so the shear waves and body waves it produces tend to deform the soil in all directions (for horizontal level ground).
Dynamic triaxial tests closely reproduce the in-situ soil condition (all-round stresses). During earthquakes, seismic waves cause loose sand to contract, thereby increasing the pore water pressure. Under undrained loading, the development of high pore pressure results in an upward flow of water, bringing the sand to a liquefied condition. Pore water pressure is measured effectively in triaxial tests. Of the stress-controlled and strain-controlled conditions, strain control is the more widely adopted, because stress-controlled tests are highly sensitive to sample disturbance. In strain-controlled tests, the pore pressure developed is less affected by specimen fabric and density. Tests can be performed on both intact and reconstituted specimens; when the results from the two are compared, the deviation is much larger in stress-controlled than in strain-controlled testing (tests by Vucetic and Dobry, 1988). Stress-path control is used to study the path dependence of soil behaviour. Stress-deformation and strength characteristics depend on the initial static stress field, initial void ratio, pulsating stress level and frequency of loading.

1.2 APPLICATIONS

A variety of engineering problems rely heavily on the behaviour of soils under dynamic conditions. These include the design and remediation of machine foundations, geotechnical earthquake engineering, protection against construction vibration, non-destructive characterization of the subsurface, design of offshore structures, screening of rail- and traffic-induced vibrations, vibration isolation, etc. For the dynamic triaxial test in particular, the widest application is the study of the liquefaction behaviour of soil under seismic forces.

2. HISTORY

One of the first pieces of equipment designed for cyclic triaxial loading was the pendulum loading apparatus of Casagrande and Shannon (1949).
This utilizes the energy of a pendulum which, when released from a selected height, strikes a spring connected to the piston rod of a hydraulic cylinder; this cylinder is in turn connected to another cylinder located above the cell. The time of loading was between 0.05 and 0.01 sec.

Fig. 2.1. Pendulum loading apparatus.

Casagrande and Shannon also came up with an equipment called the falling beam apparatus, shown in Fig. 2.2. In 1960, Seed and Fead used a pneumatic system for cyclic loading, marking the evolution of the dynamic triaxial shear apparatus.

Fig. 2.2. Falling beam apparatus.

3. PRINCIPLE

The first attempt was made by Seed and Lee (1966), who consolidated a saturated sample under a confining pressure and subjected it to a constant-amplitude cyclic axial stress under undrained conditions. The test was performed until the specimen deformed to a certain peak axial strain. This loading creates stress conditions on a plane at 45° through the sample that are the same as those produced on the horizontal plane in the ground during earthquakes; this is the basis on which the cyclic triaxial test works.

Fig. 3.1. Simulation of geostatic and cyclic stress in the triaxial test.

Shear stress is the quantity of interest because it causes deformation. To incorporate seismic effects, a uniform shear stress per cycle is adopted in place of the non-uniform stress-time data: the maximum shear stress is multiplied by a correction factor, and the test is then carried out until the required deformation or failure occurs.

4. EQUIPMENT

4.1 Parts of the dynamic triaxial apparatus suggested by ASTM D 3999-91 (2003)

Apparatus - Purpose - Consideration:

1. Triaxial pressure cell - to mount the sample and conduct the test - tolerance for piston and top platen; low-friction piston seal.
   - Ball bearings and friction seal - to minimise friction - friction may be within ±2% of the maximum single-amplitude cyclic load.
   - Load rod - to facilitate loading - diameter = 1/6 of specimen diameter.
   - Specimen cap and base - to provide a sealed platform - rigid, non-corrosive, impermeable; cap weight < 0.5% of the applied axial failure load (static).
   - Valves - to regulate back pressure, cell pressure and pore water pressure - leak-proof, able to withstand the applied pressure.
   - Top and bottom platens - to facilitate loading and provide a rigid base - proper alignment; load rod sealed to the top platen with a friction seal.
2. Cyclic loading equipment - to induce cyclic loads - uniform sine wave at 0.1 to 2 Hz; simple ram or closed-loop electro-hydraulic system.
3. Recording equipment - to record the data obtained - properly calibrated.
   - Load measurement - to measure the cyclic loads - electrical, analog or digital.
   - Axial deformation measurement - to measure axial strain - LVDT or dial gauges.
   - Pressure control - to regulate cell pressure - mercury or pneumatic device.
   - Pore pressure transducer - to measure pore pressure - transducers or electronic pressure meters.
   - Volume change measurement - to check the volume change of the specimen - calibrated, widely used gauges.
4. Miscellaneous - (a) rubber membrane, to hold the specimen: leak-proof with minimum restraint; (b) filter paper, to facilitate saturation: must not cover more than 50% of the specimen.

Fig. 4.1. Schematic diagram of a strain-controlled dynamic triaxial test.

4.2 WORKING PROCEDURE

The working mechanism mainly involves three phases:
a) Saturation phase: the sand sample is first saturated by applying cell pressure and back pressure simultaneously (cell pressure > back pressure).
b) Consolidation phase: during the test the void ratio should be kept constant, and this is achieved in this phase; the back pressure valve is closed.
c) Load phase: the actual test begins here. The strain rate is fixed using a gear system, and the cyclic load is applied by either a hydraulic or a pneumatic system. Loads and corresponding strains are recorded during loading, unloading and reloading.
The test is continued until the required strain or failure occurs.

5. RESULTS

From the cyclic triaxial test, various graphs can be obtained for detailed analysis:
- Load vs. deformation
- Deviatoric stress vs. time
- Axial strain (%) vs. time
- Excess pore pressure vs. axial strain (%)
- Excess pore pressure vs. time
- Deviatoric stress vs. axial strain (%)

Fig. 5.1. Axial load vs. axial deformation.

From the hysteresis loop obtained, the dynamic Young's modulus (Ed) can be calculated, from which the shear modulus (G) follows using Poisson's ratio (µ). The damping factor (D) can also be calculated from the loop:

Shear modulus: G = Ed / 2(1 + µ)
Damping factor: D = Ai / (4π·At)
where Ai is the area of the loop and At is the area of the shaded portion.

6. DISCUSSION

Two series of undrained cyclic triaxial strain-controlled tests were performed by Mladen Vucetic and Ricardo Dobry on two different Imperial Valley, California, silty sands which liquefied during an earthquake in 1981. Both intact and reconstituted specimens were tested. The cyclic shear strain is the fundamental parameter governing pore pressure buildup. The saturated deposit is composed of two layers: an upper, looser sandy silt unit located between 2.6 m and 3.5 m depth, containing more fines (37%) (sand A), and a lower, loose to medium-dense sand unit located between 3.5 m and 6.8 m, containing fewer fines (25%) (sand B). Selected plots of normalized cyclic shear stress, τcy* = τcy/σ′c, and normalized residual pore pressure, u* = u/σ′c, versus the number of uniform strain cycles, nc, up to nc = 30, are shown in Figs. 6.1 and 6.2 for sands A and B, respectively. Here τcy is the amplitude of cyclic shear stress acting on 45° planes within the specimen, with τcy = σdc/2, where σdc is the cyclic deviatoric stress amplitude, and u is the accumulated residual cyclic pore pressure at the end of the pertinent strain cycle, derived from measurements at the point of the cycle at which the cyclic stress σdc = τcy = 0.

Fig. 6.1. Comparison of results obtained on intact and reconstituted specimens of sand A.

The effect of sand fabric, that is, the difference between results obtained on reconstituted and intact specimens, is analyzed next for both sands A and B with the help of Figs. 6.1 and 6.2. It can readily be noticed in these two figures that the residual pore pressures in cyclic triaxial strain-controlled tests are practically unaffected by the change of sand fabric (u* versus nc curves), while, on the contrary, the soil stiffness is significantly affected (τcy* versus nc curves). This is especially noticeable in Fig. 6.2.

Fig. 6.2. Comparison of results obtained on intact and reconstituted specimens of sand B.
Fig. 6.3. Residual pore pressure in reconstituted specimens of sands A and B.

It must also be noticed that the range of cyclic shear stresses measured at a given cyclic strain in Figs. 6.1 and 6.2, for the two sands and the two types of specimen fabric, is quite wide, in contrast to the corresponding range of pore pressures in Fig. 6.3, which is very narrow. This confirms once again that cyclic shear strain is the fundamental parameter governing pore pressure buildup, and that strain-controlled testing represents the most appropriate, as well as the most convenient, approach currently available for evaluating seismic pore pressures and liquefaction of level-ground sites.

7. FACTORS AFFECTING CYCLIC STRENGTH

Effect of confining stress: The critical void ratio is not a constant but decreases as confining pressure increases. The stress ratio decreases with increasing confining pressure.

Effect of loading wave form: Load histories are converted into uniform cycles by the correction factor. The order of increasing strength was rectangular, triangular, then sine.

Effect of frequency on cyclic strength: Frequency has only a minor (< 10 percent) effect on the cyclic strength of soils; slower loading frequencies give slightly higher strength.
Effect of relative density: At relative densities < 50%, complete liquefaction occurred almost simultaneously, and relative densities above 70% were required for safety against large strains.

Effect of size and gradation: Well-graded material was somewhat weaker than uniformly graded material. This finding was attributed to a greater densification tendency in well-graded soils, as finer particles move into the voids between larger particles; this densification tendency causes increased pore pressure.

Effect of strain history: Once a specimen has liquefied and reconsolidated to a denser structure, despite the densification, it is much weaker against reapplied cyclic stresses.

Effect of overconsolidation ratio and K0: The maximum deviator stress required to cause a critical strain in a specified number of cycles increases with the K0 ratio. The cyclic strength also increases as OCR and fines content increase.

8. VALIDATION

Validation of the apparatus is done through successive tests, researchers' experience and the available equipment. Mladen Vucetic and Ricardo Dobry conducted two series (intact and reconstituted specimens) of undrained cyclic triaxial tests on Imperial Valley, California, silty sands which liquefied during an earthquake in 1981. The results were compared and the experimental setup was validated; further tests were then conducted on different types of sand and validated.

9. DEVELOPMENTS

Since 1966 there have been considerable improvements in triaxial testing apparatus, yielding results of higher accuracy and efficiency. Initially stress-controlled methods were used, then strain-controlled methods were adopted. To apply loads, a hydraulic jack was used first, then a pneumatic system, and then electro-pneumatic actuation. There have been many such advancements of the triaxial test; some are discussed below. Chan (1981) and Li et al. (1988) developed a popular electro-pneumatic apparatus (Fig. 9.1) which incorporates many advancements in apparatus design and operation.

Fig. 9.1. Electro-pneumatic apparatus.

The automated cyclic triaxial system is the next development and is the most commonly used apparatus. It is well known for its automated input and output, data acquisition and quick results.

Fig. 9.2. Automated triaxial system.

9.1 RECENT ADVANCEMENTS

GDS entry-level dynamic triaxial testing system (ELDyn) - technical specifications:
- Maximum operating frequency: 5 Hz
- Minimum operating frequency: < 0.001 Hz
- Highly accurate dynamic, electro-mechanical actuator
- Available sample sizes (depending on cell selection): 38 x 76 mm (or 39.1 x 78.2 mm) up to 150 x 300 mm
- 16-bit dynamic data logging
- 16-bit dynamic actuator control channel
- Cell pressure range to 2 MPa (dependent on cell choice)
- Small laboratory footprint; no hydraulic power pack required
- Standard triaxial cells can be used (upgraded to dynamic seals and bearings)
- Can be upgraded to perform P- and S-wave bender element testing
- Can be upgraded to perform unsaturated triaxial testing with the addition of: (a) an unsaturated pedestal with high air entry porous stone; (b) a 1000 cc digital air pressure/volume controller (ADVDPC) for the application of pore air pressure and measurement of air volume change; (c) an optional HKUST double cell (available in the data sheet 'Unsaturated Triaxial Testing of Soil (UNSAT)').

Fig. 9.3. GDS ELDyn system.

As well as dynamic triaxial tests, the ELDyn system can be used to carry out traditional triaxial tests such as UU, CU and CD, as well as more advanced tests such as stress paths, K0 and resilient modulus tests. The HS28.610 cyclic triaxial test system is another sophisticated apparatus, available in New Delhi (India). DYNATRIAX is a further advanced cyclic triaxial system available in many places, including Los Angeles and Poland; it can operate at a maximum frequency of 10 Hz.

10.
CONCLUSION

Many innovative systems for cyclic loading of soil have emerged in geotechnical engineering. Each system has its unique advantages and limitations, and some ways of minimizing these limitations have been pointed out. The advanced equipment provides an additional tool for performing cyclic loading, in particular liquefaction testing. Extreme care must be used in preparing remoulded sand specimens, and special attention must be paid to testing technique in order to obtain reproducible results. In particular, the method of specimen preparation, the shape of the loading wave form and the precision of density determinations greatly affect cyclic strength. Hence, the development of ASTM standards for cyclic triaxial testing should take these factors into consideration.

11. REFERENCES

- ASTM D 3999, Determination of Modulus and Damping Properties of Soils Using the Cyclic Triaxial Apparatus.
- Donaghe, R. T., Chaney, R. C., and Silver, M. L., Advanced Triaxial Testing of Soil and Rock, p. 484.
- Chan, C. K., 1981, "An Electropneumatic Cyclic Loading System," Geotechnical Testing Journal, ASTM, Vol. 4, No. 4, pp. 183-187.
- Ebelhar, R. J., Drnevich, V. P., and Kutter, B. L., Dynamic Geotechnical Testing II, ASTM STP 1213.
- Silver, M. L., Dynamic Geotechnical Testing: A Symposium.
- Khosla, V. K. and Singh, R. D., "Apparatus for Cyclic Stress Path Testing," Geotechnical Testing Journal, GTJODJ, Vol. 6, No. 4, Dec. 1983, pp. 165-172.
- Prasad, Fundamentals of Soil Dynamics and Earthquake Engineering.
- Jefferies, M. and Been, K., Soil Liquefaction: A Critical State Approach.
- Kramer, S. L., Geotechnical Earthquake Engineering, Prentice-Hall, Inc., Upper Saddle River, NJ, 1996.
- Townsend, F. C., "A Review of Factors Affecting Cyclic Triaxial Tests," Dynamic Geotechnical Testing, ASTM STP 654, American Society for Testing and Materials, 1978, pp. 356-383.
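The shear modulus and damping relations from the results section are simple to evaluate numerically. The short Python sketch below is illustrative only: all input values (the dynamic Young's modulus Ed, Poisson's ratio µ, and the two hysteresis-loop areas Ai and At) are hypothetical numbers chosen just to show the arithmetic of G = Ed / 2(1 + µ) and D = Ai / (4π·At).

```python
import math

def shear_modulus(e_d, mu):
    """Shear modulus from dynamic Young's modulus and Poisson's ratio: G = Ed / (2(1 + mu))."""
    return e_d / (2.0 * (1.0 + mu))

def damping_ratio(a_loop, a_shaded):
    """Damping ratio from the hysteresis loop: D = Ai / (4 * pi * At),
    where Ai is the loop area (energy dissipated per cycle) and
    At is the shaded triangle area (maximum stored strain energy)."""
    return a_loop / (4.0 * math.pi * a_shaded)

# Hypothetical example values (for illustration only, not from the tests above)
E_d = 75.0e6   # dynamic Young's modulus, Pa
mu = 0.3       # Poisson's ratio
A_i = 0.12     # loop area, in consistent energy units
A_t = 0.95     # shaded-portion area, same units

G = shear_modulus(E_d, mu)     # about 28.8 MPa for these inputs
D = damping_ratio(A_i, A_t)    # about 0.01, i.e. roughly 1% damping
print(f"G = {G/1e6:.1f} MPa, D = {D:.4f}")
```

Note that D is dimensionless provided the two areas are measured in the same units from the same stress-strain plot.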

Sunday, September 29, 2019

How Are Dreams Proved to Be Futile in Of Mice and Men Essay

The dreams in "Of Mice and Men" are influenced by the poem "To a Mouse" by Robert Burns, and the relationship between the poem and the novel is seen through the build-up of the characters' hopes and dreams at the time of the Great Depression and how they struggled to hold on to their ambitions. The context of both texts clearly portrays the death of the future plans the working class clung to at that time, and the writers do this to illustrate the chances of ordinary people succeeding and how being born into a hierarchy means that you are destined to a class in society. In Of Mice and Men, Lennie is introduced with a "shapeless face", and animal imagery is used to signify his strength, "bear drags his paws"; this portrayal of Lennie sets him apart from George in the hierarchy. As the story develops, the reader's understanding of George and Lennie's relationship does too, and the reader realises that the theme that keeps both protagonists motivated is the dream. This is developed further when Steinbeck introduces the dream for the first time, "I remember about the rabbits, George": it is clear that Lennie is academically weak, and for him to remember the dream indicates how much it means to him; it is possibly the thing that matters most to him. However, early in the novel Steinbeck uses animal imagery to foreshadow the death of Lennie and the death of the dream, "shoot you for a coyote"; the author highlights Lennie's vulnerability and future death to suggest that his academic weakness is possibly what led him to his death. At the beginning of the novel George gets into a quarrel with Lennie about ketchup, "we ain't got any", and during his rant George clearly emphasizes what he sees as the American Dream in comparison to what they both see.
George goes on to imply that Lennie is a road block to his dream, and this is partially true, as it is what Lennie does towards the end of the novel that kills the chances of the dream. George's dream can be considered a typical working-class dream, as it is not very promising and has no future outlook. The death of the dream in Of Mice and Men seems to be blamed on a particular individual: the death of George and Lennie's dream is blamed on Lennie, and later in the novel we learn that the death of Curley's wife's dream is because of her "ol' lady". In the novel Curley's wife is portrayed as a social outcast, alongside Crooks, but in her case because of her gender, as they lived in a sexist society. However, beneath her make-up her interior reveals her dreams and how they too were crushed; the reader also finds out that she uses her sexuality as a weapon to grab the attention of the ranch workers, since no one gave her the recognition she wanted. Curley's wife's dream is fully revealed towards the end of the novel as she explains it to Lennie; she clearly illustrates her very independent dream, yet it is also clear that she is very dependent on men when it comes to making the dream a reality. This may be why her dream was locked away and only brought out when she needed to look back on it emotionally; given that she lived in a man's world, women were held back from what they wanted to do and were expected to become housewives. This is the main road block that Curley's wife comes across, making her dream futile. Throughout the novel the reader realises that the characters who were too eager for their dream (Lennie and Curley's wife) reach their destiny, quite dramatically, with their death.
It seems that both characters had something in common: a lack of power. Being powerless left the protagonists vulnerable to society; Curley's wife attempted to cover it by putting on a lot of make-up, but it is clear that your weakness will eventually work against you. Furthermore, both characters' dreams were clearly futile from the beginning of the novel, as each seemed to depend on another person for their dream to succeed: in Lennie's case it was George, and Curley's wife needed a man. Steinbeck reinforces the themes of power and powerlessness, with links to the dream, to suggest that there is a bond between having the dream and having the power to make it real. This portrayal in Of Mice and Men illustrates not only that the chances of succeeding during the Great Depression were very limited, but also that without power or status, which both characters lacked, the chances of realising the dream were nil.

Saturday, September 28, 2019

How government policy affect US banking system Essay

The demand-deposit control has turned banks into the middle agents and principal agents in the US payment system and in the financial transactions taking place. The costs and benefits of banking regulation include the major industry changes witnessed, which include internet and electronic banking, improved data processing and communication models, and the development of more complex risk management and financial instruments (Graig, 1983). Government policy on the banking sector determines the outlook of the banking system, influencing the entry of players into the system and determining who is capable of engaging in the banking business. The policy definitions offer guidance on who can operate a bank, which services can be traded, and the models of expansion that banks can employ. Banking regulation in the US increases the protection offered to depositors (Ambrose, Michael, & Anthony, 2005). This adjusted effect came after the government recognized that increasing numbers of people were conducting their business through banks, and that deposits of businesses and individuals were increasing. Banking regulations imposed the improvement of financial and monetary stability. This was introduced in reaction to the recognition that there was an increasing level of transactions and business carried out among businesses and individuals (Spong, 2000). ...ation of the US banking structure increased competition among banking-sector players, which was expected to improve the quality and efficiency of the services delivered to customers (Graig, 1983). Government policy and regulation have increased the level of consumer protection in the US banking system in a number of ways, including safeguarding the money saved in banks and improving the quality of the services banks offer (Rezende, 2011).
The banking legislation and regulation of the 1970s and 1980s led to the development of a more open and competitive banking system. The policies also led to the adoption of technological models that could improve the quality of banking services offered in the US. The regulation further improved the capacity of banks to serve an increasing number of customers and to adapt to the changing economic environment. Some of the regulations that created this effect include the International Banking Act of 1978 (Spong, 2000), which required equitable treatment of both domestic and foreign bankers in different areas, including reserve deposits, branching and the observance of banking regulations (Spong, 2000). The second was the Financial Institutions Regulatory and Interest Rate Control Act of 1978, which sought to eliminate different forms of financial abuse (Federal Reserve, 2010); the Act also increased the capacity of regulatory agencies to prevent the concentration of management and control. Another is the Depository Institutions Deregulation and Monetary Control Act of 1980 (Spong, 2000), which sought to ensure that the different financial institutions did business on a more equal and efficient competitive footing (Rezende, 2011). Government policy led to the

Friday, September 27, 2019

Children Face Asthma Risk If Mothers Exposed to Pollutants Essay

The article is based on research from Denmark which states that children exposed to chlorinated chemicals before birth are more likely to have asthma before they are 20 years old. Five other PCB compounds apparently have a weak relationship with asthma. The article describes how these pollutants are usually found in fish and other marine species and in pesticides. The author also points out that some PCBs were widely used in the 1960s and 1970s but are now banned. They have a tendency to linger in human cells, however, and this means that babies can be affected through their mothers; they can suffer wheezing and asthma because of these chemicals. After reading this article I realized that environmental pollution can have very long-term effects. If people are using harmful products today, then it is possible that they will also harm the children of the future. This is an invisible danger hidden within the world around us and inside human bodies. What we need to do is read more articles about the environment and spread this kind of information across the world. If we ignore this problem, then our children and our children's children will suffer in the future. It is our responsibility to think about the results of our actions, and to take action when evidence like this is found. It is time we banned more of these products in order to protect the environment and the future of all the species on the planet.

Thursday, September 26, 2019

Contrast between Japanese Ninja Anime and American Ninja Cartoon Essay

This is because if the movies lacked the bad people, then they would not have achieved their current audience level. The main difference between the two films is the level of engagement between characters. For instance, in Ninja Clash in the Land of Snow the characters maintain a fair relationship without getting into extreme action. In the first scenes, Naruto and his companions are assigned the role of protecting an actor during a filming procession. At first, the characters have a fair relationship and there is not much to report in terms of action and conflicts. Real action begins soon after the characters reach the Land of Snow, where they are attacked by bad guys. Unlike TMNT, Naruto the Movie: Ninja Clash in the Land of Snow has reactive action (Wiater 98): the main characters attack only after they have been attacked. The TMNT characters, on the other hand, display active action; in this approach, the characters go out in search of bad guys. The film is set in a crime-ravaged New York City where the ninja turtles are out to fight crime. Unlike the previous movie, the ninja turtles go out in search of criminals and engage them, and the movie is more action-packed than Ninja Clash in the Land of Snow. The action scenes in the film are fun to watch and have a comic approach; indeed, the action in the movie lacks a definite story or a purposeful theme (Rahimi 34). The turtles are involved in street fights to secure their place in the city and to curb lawlessness; however, the turtles attack crime suspects even before confirming their involvement in crime. The films have striking artistic features. At the beginning, both movies have stunning colors. Unfortunately, things begin to... The two movies are related yet very different in terms of presentation and use of cinematography techniques.
Moreover, the films have different ways of creating and presenting characters. Nevertheless, the movies have a similar audience, and their plot developments are almost similar. The two films use different approaches to character creation. Although both films use fictional characters, the directors tend to create a sense of reality. The American Teenage Mutant Ninja Turtles uses cartoons, while the Japanese ninja film uses anime to develop its characters. Unlike cartoons, anime characters have distinct facial expressions that can be used to create a wide variety of physical characteristics; thus, anime is closer to reality than cartoons (Eastman 123). Cartoons, on the other hand, have features that are far from real (Wiater 98), and they do not have proportional physical appearances. Anime can be used to tell real human stories, while cartoons are used specifically for comic purposes. The attributes of cartoons and anime described above create a distinctive element between the two films. Moreover, the different approaches to character development shape audience and plot; indeed, plot and theme development in both movies was determined by the differences in character development and creation. The movies have significant levels of similarity despite having different settings, themes and character selection: both films tell ninja stories about the elimination of crime and bad guys. The films also differ in their selection of colors and background structures, which makes them attract different audiences and followings.

Wednesday, September 25, 2019

Film Analysis on Product Development (of the film Kinky Boots) Essay

Showing him wearing rubber shoes on his way to London signifies his lack of enthusiasm for the shoe production business. His relocation, together with his fiancée, was to his liking, as he wanted to get away from his family's business as soon as possible. But the unexpected death of his father forces him to move back to Northampton and lay off his workers when he realises there is no way for him to save the company: there is simply not enough of a market for the shoes they are producing. The four generations that proudly carried on the tradition of Price & Sons over the years were in their last days when he entered the picture; even his father had already decided to sell the factory before his death. In an accidental meeting with drag queen Lola, Charlie is struck with the inspiration to create, as he describes it, "proper, good, decent, built-to-last boots" (Joel). Men of their persuasion are forced to buy women's shoes that are not sturdy enough to withstand the weight of a full-size man. Drag queens have very specific needs and wants that women's shoes do not meet: the heels break and their feet hurt, because their weight is not supported by proper footwear. The brilliant idea of changing the product of Price & Sons hits Charlie and propels him to do something to save his family's company. "You exploit divergence to create a new category, and the expansion of that new category allows your brand to flourish" (Ries and Ries). This revolutionary marketing idea is one that is apparent in the movie 'Kinky Boots'. They were more than just the first to take advantage of a marketing niche; they were able to create a new category that was distinguishably new in the shoemaking industry. There was no other shoe company specializing in shoes made for drag queens, who prefer women's designs but need them fabricated accordingly.
With the concept, Charlie

Tuesday, September 24, 2019

Construction Contracts Essay Example | Topics and Well Written Essays - 1500 words

Construction Contracts - Essay Example One very important change is that the nomination of sub-contractors has been discarded. This means that the whole project is the sole responsibility of the contractor; hence it does not matter if certain segments of the undertaking are passed on to outside parties or agencies. There should also be minimal argument over the interpretation of the contract, because the new JCT2005 is written in plain, simple English and the major parts are segregated from one another in sections. The vital components and characteristics of the contents are retained, except that the presentation has become less complicated. Furthermore, certain terminologies have been renamed to suit the real spirit intended by the parties. If there is mutual agreement to have an overseer for the works, the title is now Architect/Contract Administrator instead of just Architect. Extension of Time is now termed Adjustment to the Completion Date. In case of a decision to suspend payment, the notice of withholding can only be given by the employer or client; it can no longer be part of the job of the Architect/Contract Administrator. In case of dispute, the covenanted remedy is now litigation instead of arbitration, although the parties remain free to resort to arbitration if they opt to thresh out their differences through that more expedient and convenient alternative method. There is also a provision recommending mediation in case of controversies. In the event that one party becomes insolvent, the other has to serve the appropriate notice of termination. Electronic mail is now allowed as a medium for service of notices and other correspondence. The provision for the employer's own design team is still the same in JCT2005; however, a design option for the contractor is also provided for.
On insurance prerequisites, the contractor is now obliged to put up professional indemnity insurance, a feature not included in the 1998 version. The right of the employer to a liquidated damages reduction is set forth in the adjustment of the time for completion, while the terms for relevant events are made more burdensome to the contractor, who is to shoulder consequential costs brought about by materials and labour shortages resulting from industrial unrest such as strikes. In such cases and similar instances, the Architect/Contract Administrator is under an obligation to explain any adjustment to the completion date. In order to eradicate confusion regarding notices in the payment aspect of the covenant, the contractor under JCT2005 has the right to be paid the sum due in light of the progress of performance, even if he stated another amount in his application to collect and the employer withholds a certain portion.

Monday, September 23, 2019

Understanding Buyers Value Essay Example | Topics and Well Written Essays - 750 words

Understanding Buyers Value - Essay Example Understanding Buyers Value Michael Porter (1991, p. 103) presented an internal value chain of an organization, from conceptualization to delivery of products to customers, and argued that "buyer value is created when a firm lowers its buyer's cost or enhances its buyer's performance". From the author's perspective, buyer value is the positive perception the organization has earned from the buyer amidst the many factors that influence that perception. These factors may include behaviour toward the buyer, communications carried out with the buyer, clarity and transparency of information provided, understanding of the buyer's needs, personalization of the solution against those needs, discount levels, value-added services, and after-sales service and product-upgrade services provided whenever requested. It is possible that the buyer has carried out a competitive pricing analysis before bargaining, and hence the seller has to either justify a higher price by demonstrating tangible value additions or simply quote lower than the competition to sell the products. Hence, Porter's argument about lowering the buyer's cost and enhancing the buyer's performance again becomes applicable if the buyer appreciates these facts from her/his perspective. The firm's perspective can at most be to control the factors (value chain management) that can achieve the buyer's positive perceptions; what the buyer finally perceives is the actual value achieved by the firm. The author strongly agrees with the theory of reduced sacrifice undertaken by the buyer, because it strongly influences the buyer's perception of the firm. Discussion Points Elmaghraby and Keskinocak (2003, pp. 1288-1289) presented the mechanism of dynamic pricing to get the best benefits out of increased customer demand and reduced inventories.
In such cases, firms tend to increase their prices, which in turn increases the level of sacrifice customers must make to acquire the products. The author wishes to discuss whether such dynamic pricing strategies, in the attempt to get the best out of "favorable conditions for the firm", cause long-term damage to customers' value perceptions, which may backfire especially when demand eases. Slater and Narver (1998, pp. 1000-1005) argued that the long-term competitive advantage of companies can be improved by directing innovation more toward market orientation than customer orientation, primarily because customers are grossly ignorant of their own needs. On the contrary, it is true that customers perceive value on their own, based on their social influences and past experiences. The author wishes to discuss how companies should be able to control the perceptions of customers to achieve positive buyer value if this theory about market orientation is to be trusted. Conclusion: The author presented his own perspective on buyer value, stating that it largely depends upon the factors that drive positive perceptions in the customer's mind. The best that an organization can do is to apply effective efforts to achieve this positive perception.

Sunday, September 22, 2019

Holborne - Pavane and Galliard Essay Example for Free

Holborne Pavane and Galliard Essay Holborne's Pavane 'The image of melancholy' and Galliard 'Ecce quam bonum' ('Behold, how good a thing it is') are two pieces that belong to the genre of 'consort music', a form of domestic music that made its appearance in Elizabethan England. The word 'consort' may derive from the French 'concert', which implied an ensemble of instruments or voices performing together. In later years, from about 1575, 'broken consorts' were introduced, and these included mixed ensembles; the usual instrumentation for a broken consort was lutes, viols (treble and bass) and flute. Consorts of viols began to appear during the time of Henry VIII, the earliest source of the music being a songbook of Henry VIII, found after his death, that included copies of viol consorts. There are three main types of consorts, one being the Pavane and Galliard, which is a dance form. In many of the pieces the writing was very similar to contemporary writing for voices; therefore it was usually polyphonic in texture. When paired together, the Pavane usually takes the more melancholy character, while the Galliard takes a more cheerful one, as shown in these two movements by Holborne. Although dance forms were used for both movements, the dense counterpoint provides melodic interest for all five players and for listeners, which suggests the music was more for listening than dancing. Not much is known about Holborne, but he did publish two collections of music with about 120 works altogether.

Saturday, September 21, 2019

The Challenger Address Essay Example for Free

The Challenger Address Essay Ronald Wilson Reagan became the 40th United States President after defeating the Democratic incumbent Jimmy Carter in the 1980 presidential election, winning 50.7% of the popular vote. It can be said that the beginning of his term was not pleasing. In his first year he fired 11,345 air traffic controllers who, according to him, violated the government regulation that prohibits unions from striking. Also, unemployment in the United States rose to 10.8% during his first year, higher than at any time since the Great Depression; this percentage dropped, however, during the rest of his presidency. It can also be said that after his first term Reagan had gained the respect and trust of the American people: nominated a second time, he won the 1984 presidential election by a landslide, with an unprecedented number of votes. It was also said that during his years of presidency the people saw a restoration of prosperity as well as global peace. In 1986 the income tax code was revised by Reagan, exempting millions of people with low incomes. The year 1986 was not just a year of development in terms of social conditions in the United States; it was also a time when space exploration was greatly admired. By then there had been several developments and expeditions that could be considered successful, and people, especially Americans, were very enthusiastic about exploring and achieving greater heights in space travel and exploration. However, this year was also a tragic time for the space exploration era: on January 28, 1986, the Space Shuttle Challenger exploded a short while after lifting off, leaving none of the seven crew members alive. The public was shocked at what they witnessed.
Also, at that time President Reagan had been planning to deliver a speech to the American people but was preempted by the tragedy of the Challenger. Thus President Reagan delivered a speech concerning the accident rather than the one he had intended to deliver. This speech became well known as the "Challenger Address". The accident had a great impact on the American people, for the main reason stated by the president himself in the address: there had been a history in space exploration in which lives were lost, three of them, in an accident around 19 years before the Challenger tragedy. Although there had been deaths connected with space exploration, the fact that this tragedy happened in midair shocked Americans and the whole world as well; there had never been an accident in space exploration that happened a few seconds after a shuttle's takeoff. Two interpretations can be given for why President Reagan's speech became known as the "Challenger Address". The first is that it was given that title because the shuttle was named Challenger. However, looking at the contents of the speech, it can be said that the word "Challenger" does not refer only to the name of the shuttle but also to the message of the speech: the president was challenging the nation to continue its search and to never lose heart in space exploration because of the Challenger accident. Several components of the speech make it effective and appealing to the American people. The very first device Reagan used to encourage the people to pursue space exploration was calling the astronauts heroes: "We mourn seven heroes" (Reagan, 1986). He also stated that the astronauts were well aware of the dangers they had to face but had overcome them. The president also showed his sympathy with the people, especially the families of the astronauts.
In order to gain the sympathy and hearts of the people, they must know that you are one with them in spirit and emotion. President Reagan expressed his anguish and mourned not only with the families of the astronauts but with the whole nation as well. At the beginning of his speech, President Reagan sympathized and mourned with the nation in order to appeal to them. However, his tone developed in the succeeding paragraphs, from appealing to encouraging. He did this by saying, "The future doesn't belong to the fainthearted; it belongs to the brave" (Reagan, 1986). These statements surely challenged the people of the time; the impact of the tragedy was reduced in an instant. The president also stated his desire to talk to the people of NASA, to tell and show them that their efforts were well appreciated, as were their sacrifices and bravery, and to encourage them to pursue their search despite the Challenger accident. Another part of the "Challenger Address" was the story of Sir Francis Drake, a great explorer who lived by the sea and died on it. President Reagan used parallelism to connect the present situation to another situation in the past: those who died in the tragedy are paralleled with Drake, which creates a good impression of the seven crew members of the Challenger. Just as Drake was considered a great man, they too are considered heroes and great explorers. President Reagan ended his speech by saying that the memories of the seven crew members will never be forgotten, showing that they are valued and that their sacrifices and hard work will never be in vain. This was also important in order to encourage the youth to pursue becoming astronauts despite the fear established by the Challenger accident. It is important to know that your hard work is acknowledged by society in general in order to be motivated to pursue a certain career.
One of the main factors in why the speech was effective was the image, personality and credibility of the speaker. Being the president at the time, people would surely listen to what Reagan had to say. However, in terms of astronomical knowledge and the risks and sacrifices required of an astronaut, he could not be considered an expert. There will surely be different views of the address; it may encourage some in the audience, while others may feel the president was in no position to make such an address. In every issue there is always a positive as well as a negative aspect. However, it can be said that President Reagan was able to genuinely challenge the people to strive for greater achievements in space exploration, taking the Challenger tragedy as part of reaching greater knowledge and understanding of the universe: every quest carries its own risks and sacrifices that must be overcome in order to truly succeed in that particular area. The occasion, or the time at which a speech is delivered, is also a very important factor in a speech's effectiveness. The "Challenger Address" would not have had the same impact if it had been delivered without such an event, the Challenger tragedy, happening. Thus the situation, the tragedy itself, is a very big factor in why the address made by President Reagan caught the attention of so many people at the time, as well as the attention and interest of people in the present day. There are many things that have to be considered in order to make an effective speech, of which four components are primary: the speaker, the audience, the occasion and, of course, the speech itself. It can be said that the "Challenger Address" by former President Reagan is effective because these components were addressed properly. The speaker was credible enough for the people to listen to what he had to say, and the occasion was sufficient to gain the attention and interest of the audience.
Of course, the audience was concerned about the situation because of the impact of the tragedy. And of course, the speech itself was well made, something that is expected of a United States President. References Michigan State University Libraries. (No date). Space exploration. Retrieved January 30, 2008 from http://www.lib.msu.edu/publ_ser/docs/displays/Displaymarch03.html. Reagan, R. (1986). The space shuttle 'Challenger' tragedy address. Retrieved January 30, 2008 from http://www.americanrhetoric.com/speeches/ronaldreaganchallenger.htm. The White House. (No date). President Ronald Reagan 1911-2004. Retrieved January 30, 2008 from http://www.whitehouse.gov/history/presidents/rr40.html.

Friday, September 20, 2019

Literature Review About Cryptography And Steganography Computer Science Essay

Literature Review About Cryptography And Steganography Computer Science Essay The initial forms of data hiding can truly be considered extremely simple forms of private-key cryptography, the key in this case being the knowledge of the scheme being implemented. Steganography books are overflowing with examples of such schemes used throughout history: Greek messengers had messages written on their shaved heads, hiding the message when their hair grew back. With the passage of time these old techniques improved in terms of optimization and security of the transmitted message. Nowadays, cryptographic methods have reached a level of sophistication such that properly encrypted communications can be assumed secure well beyond the practical life of the information communicated. In fact, it is expected that the strongest algorithms, using multi-kilobit keys, could not be broken by brute force even if all the computing resources worldwide for the next 20 years were dedicated to the attack. Obviously there is a chance that weaknesses could be found, or that computing power could advance unexpectedly, but existing cryptographic schemes are generally adequate for most users and applications. So why pursue the field of information hiding? There are a number of good reasons. The first is that security through obscurity is not necessarily a bad thing, provided that it is not the only security mechanism employed. Steganography, for instance, permits us to conceal encrypted data in mediums less likely to draw attention. A garble of arbitrary characters being communicated between two parties may give a clue to an observant third party that sensitive data is being transmitted, whereas innocuous images carrying some extra noise may not. The added information in the images is still in encrypted form, but draws much less interest embedded in the images than it would otherwise.
This becomes especially significant as the technological discrepancy between individuals and institutions grows. Governments and businesses usually have access to more powerful systems and better encryption algorithms than individuals; hence the possibility of individuals' messages being broken increases with each passing year. Decreasing the number of messages institutions flag as suspect will certainly help to protect privacy. An additional benefit is that information hiding can fundamentally alter the way we think about information security. Cryptographic schemes usually rely on the metaphor of a piece of information being placed in a protected box and locked with a key: anyone with the proper key can gain access, since the information itself is not disturbed, but once the box is open, all of the security is gone. Compare this with information hiding schemes, in which the key is inserted into the information itself. The contrast is well demonstrated by current DVD encryption methods. Digitally encoded video is encapsulated in an encrypted container by the CSS algorithm, and the video is decrypted and played when the DVD player supplies the proper key. Once the video has been decoded, it is easy to transcode the content and distribute it without any mark of the author present. The approach of an ideal watermark is totally different: regardless of encryption, the watermark remains with the video even after various alteration and transcoding attempts. This clarifies the need for a combination of the two schemes. We begin with a swift tour of cryptography and steganography, which form the foundation for a large number of digital watermarking ideas, then move on to a description of the prerequisites a watermarking system must meet, as well as techniques for estimating the strengths of different algorithms.
Last of all we will spotlight various watermarking schemes and the pros and cons of each. Even though most of the focus is solely on the watermarking of digital images, most of these same concepts can be applied straightforwardly to the watermarking of digital audio and video. Background First of all we begin with some definitions. Cryptography can be described as the processing of information into an unintelligible (encrypted) form for the purposes of secure transmission; through the use of a key, the receiver can decode (decrypt) the encrypted message to retrieve the original. Steganography improves on this by concealing the fact that a communication even took place. A hidden message m is embedded into a harmless message c, which is defined as the cover-object. With the help of a key k, called the stego-key, the hidden message m is embedded into c. The resulting message produced from the hidden message m, the key k and the cover-object c is defined as the stego-object s. Ideally the stego-object is indistinguishable from the original message c, appearing as if no additional data has been embedded. Figure 1 illustrates this. Figure 1 - Illustration of a Steganographic System The cover-object is used only to create the stego-object and is then discarded. The idea is that the stego-object will be almost identical in appearance and data to the original, such that the existence of the hidden message is imperceptible. As stated earlier, we will take the stego-object to be a digital image, with the understanding that the ideas extend to other cover-objects as well. In a number of respects watermarking is similar to steganography: each seeks to embed information into a cover object with almost no effect on the quality of that cover-object. Watermarking, however, adds the extra requirement of robustness.
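The roles of cover-object c, message m, stego-key k and stego-object s just defined can be sketched in a few lines of Python. This is a toy illustration under invented assumptions (an 8-pixel cover, key 42, one bit per selected pixel), not any published scheme:

```python
import random

def embed(cover, message_bits, key):
    # The stego-key k seeds a PRNG that picks which cover pixels carry
    # the message m; each chosen pixel's LSB is replaced by a message bit.
    rng = random.Random(key)
    positions = rng.sample(range(len(cover)), len(message_bits))
    stego = list(cover)
    for pos, bit in zip(positions, message_bits):
        stego[pos] = (stego[pos] & ~1) | bit
    return stego  # the stego-object s

def extract(stego, n_bits, key):
    # Re-seeding with the same key recovers the same pixel positions.
    rng = random.Random(key)
    positions = rng.sample(range(len(stego)), n_bits)
    return [stego[pos] & 1 for pos in positions]

cover = [200, 13, 55, 90, 120, 33, 250, 77]  # toy 8-pixel cover-object c
message = [1, 0, 1]                          # hidden message m
stego = embed(cover, message, key=42)
assert extract(stego, len(message), key=42) == message
```

Note that s differs from c only in a few least-significant bits, matching the requirement that the stego-object look unchanged.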
A perfect steganographic system would embed a large quantity of information, ideally securely and with no perceptible degradation of the cover image. An ideal watermarking system would embed information that cannot be removed or modified unless the cover object is made entirely unusable. Given these differing requirements there is a trade-off: a watermarking scheme will frequently sacrifice capacity, and perhaps even a little security, for additional robustness. What prerequisites, then, should an ideal watermarking system have? The primary constraint is perceptibility. A watermarking system is useless if it degrades the cover object to the point of being unusable, or even merely distracting. Ideally the watermarked image should appear indistinguishable from the original even when viewed on the best equipment. An ideal watermark must also be highly robust, resistant to distortion both from unintentional attacks during normal usage and from deliberate attempts to disable or remove the embedded watermark (intentional or malicious attacks). Unintentional attacks involve alterations commonly applied to images during normal usage, such as scaling, contrast enhancement, resizing and cropping. The most interesting form of unintentional attack is image compression. Lossy compression and watermarking are naturally at odds: watermarking tries to encode hidden data in the redundant bits that compression tends to eliminate, so perfect watermarking and perfect compression are likely mutually exclusive. In malicious attacks, an attacker deliberately attempts to remove the watermark, frequently via geometric alterations or by the addition of noise. A final thing to keep in mind is that robustness can consist of either resilience to attack or complete fragility.
The latter is the case for watermarking schemes that require the watermark to be destroyed entirely if any tampering with the cover object occurs. Another characteristic of an ideal watermarking scheme is that it employs keys, to guarantee that the technique is not rendered ineffective the moment the algorithm becomes known. It should also be a goal that the method use an asymmetric key scheme, as in public/private-key cryptographic systems. Although private-key techniques are quite simple to apply in watermarking, asymmetric key pairs normally are not. The risk is that an embedded watermarking scheme might have its private key discovered, compromising the security of the whole system. This was precisely the case when a particular DVD decoder application left its secret key unencrypted, breaking the whole DVD copy protection system. Slightly less essential requirements of an ideal watermarking scheme are capacity and speed. A watermarking scheme must permit a useful quantity of information to be embedded into the image; this can vary from a single bit to several paragraphs of text. Additionally, in watermarking schemes destined for embedded implementations, the watermark embedding (or detection) should not be so computationally intensive as to preclude its use on low-cost microcontrollers. The final possible requirement of an ideal watermarking scheme is statistical imperceptibility: the watermarking algorithm must modify the bits of the cover in a way that does not alter the statistics of the image in any telltale fashion that might betray the presence of the watermark. This constraint is relatively less essential in watermarking than in steganography, but some applications may require it. How, then, do we provide metrics for the assessment of watermarking methods?
Capacity and speed can be estimated simply, using the number of bits per cover size and the computational complexity, respectively. The use of keys by a system is more or less a matter of definition, and statistical imperceptibility can be assessed by comparing original images with their watermarked equivalents. The more complicated task is devising metrics for perceptibility and robustness. Criteria proposed for the assessment of perceptibility are shown in the table below.

Level of Assurance | Criteria
Low | Peak Signal-to-Noise Ratio (PSNR); slightly perceptible but not annoying
Moderate | Metric based on a perceptual model; not perceptible using mass-market equipment
Moderate High | Not perceptible in comparison with the original under studio conditions
High | Survives evaluation by a large panel of persons under the strictest of conditions

Table - Possible assurance levels of perceptibility

A watermark must meet at minimum the Low level in order to be considered usable. Watermarks at this level should resist the common alterations that non-malicious users with inexpensive tools might apply to images. As robustness increases, more specialized and expensive tools become necessary, along with more intimate knowledge of the watermarking scheme being used. At the very top of the scale is provable reliability, in which it is computationally or mathematically infeasible to remove or disable the mark. This chapter has given a brief introduction to the background information, prerequisites and assessment methods needed for the implementation and evaluation of watermarking schemes. In the next chapter a variety of watermarking techniques will be described and considered in terms of their potential strengths and weaknesses. Selection of Watermark-Object The most basic question to consider is: in any watermarking or steganographic scheme, what form will the embedded message take?
The simplest approach would be to embed a text string into the image, permitting the image to directly carry information such as author, subject, time, and so on. The drawback of this technique is that ASCII text can be thought of as a form of LZW compression, with each character represented by a particular pattern of bits. Because the message is effectively compressed before embedding, the robustness of the watermark suffers: given the structure of ASCII encoding, a single bit error caused by an attack can entirely change the meaning of a character, and thus of the hidden message. It would be fairly trivial for even a simple operation such as JPEG compression to reduce a copyright string to a random collection of characters. Instead of characters, why not embed the information in an already highly redundant form, such as a raster image? Figure 2 - Ideal Watermark-Object vs. Object with Additive Gaussian Noise. Note that in spite of the large number of errors made in watermark detection, the extracted watermark is still highly recognizable. Least Significant Bit Modification The most straightforward method of watermark embedding is to embed the watermark into the least significant bits (LSBs) of the cover object. Given the surprisingly high channel capacity of using the entire cover for transmission, a smaller object may be embedded several times; even if most of the copies are lost to attacks, a single surviving watermark is considered a success. Despite its simplicity, however, LSB substitution suffers a host of weaknesses. Although it may survive some alterations, cropping, noise addition or compression is likely to defeat the watermark.
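The sequential LSB substitution described above, and its fragility, can be sketched as follows. This is a minimal illustration on a single row of invented pixel values; real implementations operate on full image arrays:

```python
def lsb_embed(cover, wm_bits):
    # Overwrite the LSB of every pixel, cycling the shorter watermark
    # so that it is embedded several times across the cover.
    return [(p & ~1) | wm_bits[i % len(wm_bits)] for i, p in enumerate(cover)]

def lsb_extract(stego, n_bits):
    return [p & 1 for p in stego[:n_bits]]

cover = [52, 55, 61, 66, 70, 61, 64, 73]  # one row of 8-bit pixels
wm = [1, 0, 1, 1]
stego = lsb_embed(cover, wm)
assert lsb_extract(stego, len(wm)) == wm
# Each pixel changes by at most one grey level, so the mark is invisible...
assert max(abs(a - b) for a, b in zip(cover, stego)) <= 1
# ...but equally cheap to destroy: forcing every LSB to 1 wipes it out.
attacked = [p | 1 for p in stego]
assert lsb_extract(attacked, len(wm)) != wm
```

The last two assertions capture the trade-off in one place: minimal perceptual impact, but no robustness at all.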
Furthermore, a simple tamper attack is to set the LSB of every pixel to 1, completely defeating the watermark with negligible impact on the cover image. In addition, once the algorithm is discovered, the embedded watermark can easily be modified by an intermediate party. An improvement on basic LSB substitution is to use a pseudo-random number generator, seeded with a given key, to determine the pixels used for embedding. Security of the watermark is improved, as it can no longer be easily read by intermediate parties. The scheme is still vulnerable to replacement of the LSBs with a constant, however; even on pixels not used for watermark bits, the impact of such a substitution on the image is negligible, so the attack costs the attacker nothing. LSB modification proves to be a simple and reasonably powerful tool for steganography, but it lacks the basic robustness that watermarking applications require. Correlation-Based Techniques Another technique for watermark embedding is to exploit the correlation properties of additive pseudo-random noise patterns as applied to an image. A pseudo-random noise pattern P(i, j) is added to the cover image R(i, j), according to the formula shown below. Rw(i, j) = R(i, j) + k * P(i, j) Insertion of Pseudo-Random Noise: k denotes a gain factor and Rw the watermarked image. Increasing k increases the robustness of the watermark at the cost of the quality of the watermarked image. To retrieve the watermark, the same pseudo-random noise generator algorithm is seeded with the same key, and the correlation between the noise pattern and the possibly watermarked image is computed. If the correlation exceeds a certain threshold T, the watermark is detected and a single bit is set. This method can easily be extended to a multiple-bit watermark by dividing the image into blocks and performing the above procedure independently on each block.
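The additive embedding Rw = R + k·P and the threshold detection can be sketched as below. A toy under stated assumptions: a flat 1-D test image, gain k = 2, threshold T = 1.0, and mean removal before correlating (a common normalization not spelled out in the text):

```python
import random

def pn_pattern(seed, n):
    # +/-1 pseudo-random noise pattern, regenerable from its seed.
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(n)]

def add_watermark(image, seed, k=2):
    # Rw(i) = R(i) + k * P(i)
    return [r + k * p for r, p in zip(image, pn_pattern(seed, len(image)))]

def detect(image, seed, threshold=1.0):
    # Correlate the mean-removed image with the regenerated pattern;
    # correlation above the threshold T means the watermark is present.
    p = pn_pattern(seed, len(image))
    mean = sum(image) / len(image)
    corr = sum((r - mean) * pi for r, pi in zip(image, p)) / len(image)
    return corr > threshold

image = [128] * 1024                 # flat toy image
marked = add_watermark(image, seed=7)
assert detect(marked, seed=7)        # watermark found (correlation near k)
assert not detect(image, seed=7)     # clean image stays below threshold
```

For the watermarked signal the correlation is close to the gain k, while for an unmarked image it hovers near zero, which is why a mid-range threshold separates the two.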
This basic scheme can be improved in a number of ways. First, the need for a threshold to decide between a binary 1 and 0 can be removed by using two separate pseudo-random noise sequences, one assigned to a binary 1 and the other to a 0. The procedure above is then performed once for each sequence, and the sequence with the higher resulting correlation is taken as the embedded bit. This increases the probability of a correct detection, even after the image has been attacked. The technique can be improved further by prefiltering the image before applying the watermark: if the correlation between the cover image and the PN pattern can be reduced, the resistance of the watermark to additional noise increases. By applying the edge enhancement filter shown below, the robustness of the watermark can be improved with no loss of capacity and very little reduction of image quality.

Edge Enhancement Pre-Filter

Rather than determining the watermark values from blocks in the spatial domain, CDMA spread-spectrum techniques can be used to scatter each bit randomly throughout the cover image, increasing capacity and improving resistance to cropping. The watermark is first formatted as a one-dimensional string rather than a two-dimensional image. For each value of the watermark, a PN pattern is generated using an independent seed or key; these seeds can either be stored or generated themselves through PN methods. The sum of all the PN sequences represents the watermark, which is then scaled and added to the cover image. To detect the watermark, each seed is used to regenerate its PN pattern, which is then correlated with the entire image. A high correlation sets that bit of the watermark to 1; otherwise it is set to 0. The process is repeated for every value of the watermark.
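A sketch of the multi-bit CDMA variant under the same illustrative assumptions (one independent key per watermark bit; in this simplified version only 1-bits contribute a pattern, so a low correlation reads back as 0):

```python
import random

def pattern(seed, n):
    rng = random.Random(seed)
    return [rng.choice([-1, 1]) for _ in range(n)]

def embed_bits(cover, bits, keys, k):
    # Sum one PN sequence per 1-bit, then scale the sum and add it to the cover.
    total = [0] * len(cover)
    for bit, key in zip(bits, keys):
        if bit:
            total = [t + w for t, w in zip(total, pattern(key, len(cover)))]
    return [c + k * t for c, t in zip(cover, total)]

def extract_bits(image, keys, threshold):
    mean = sum(image) / len(image)
    out = []
    for key in keys:
        p = pattern(key, len(image))
        corr = sum((r - mean) * w for r, w in zip(image, p)) / len(image)
        out.append(1 if corr > threshold else 0)
    return out

cover = [100] * 1024
keys = [11, 22, 33, 44]          # one independent seed per watermark bit
bits = [1, 0, 1, 1]
marked = embed_bits(cover, bits, keys, k=3)
assert extract_bits(marked, keys, threshold=1.5) == bits
```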
CDMA improves the robustness of the watermark considerably, but requires several orders of magnitude more computation.

Frequency Domain Techniques

An advantage of the spatial domain techniques discussed above is that they can be easily applied to any image, regardless of subsequent processing (whether the watermark survives that processing is a different matter entirely). A possible disadvantage of spatial techniques is that they do not allow this subsequent processing to be exploited in order to increase the robustness of the watermark. In addition, adaptive watermarking techniques are somewhat more difficult in the spatial domain. If the characteristics of the cover image could be exploited, both the robustness and the quality of the watermark could be improved. For instance, it is generally preferable to hide watermarking information in the noisy regions and edges of an image rather than in its smoother regions. The benefit is twofold: degradation in the smoother regions of an image is more perceptible to the HVS, and such regions are a prime target for lossy compression schemes. With these aspects in mind, working in a frequency domain becomes very attractive. The classic, and still most popular, domain for image processing is the Discrete Cosine Transform (DCT). The DCT allows an image to be broken up into different frequency bands, making it much easier to embed watermarking information into the middle frequency bands of an image. The middle frequency bands are chosen because they avoid the most visually important parts of the image (the low frequencies) without over-exposing the watermark to removal through compression and noise attacks (the high frequencies). One such technique uses the comparison of middle-frequency-band DCT coefficients to encode a single bit into a DCT block.
The following 8×8 block shows the division of frequencies into low, middle and high bands:

DCT Regions of Frequencies

FL represents the low-frequency region of the block and FH the high-frequency region. FM is chosen as the embedding region so as to provide additional resistance to lossy compression schemes while avoiding significant modification of the cover image. Two locations Ai(x1, y1) and Ai(x2, y2) are then chosen from the middle-frequency band FM for comparison. Rather than selecting arbitrary locations, extra robustness to compression can be gained by basing the choice of coefficients on the recommended JPEG quantization table shown below. If two locations are chosen such that they have identical quantization values, we can be confident that any scaling of one coefficient will scale the other by the same factor, preserving their relative size.

16 11 10 16 24 40 51 61
12 12 14 19 26 58 60 55
14 13 16 24 40 57 69 56
14 17 22 29 51 87 80 62
18 22 37 56 68 109 103 77
24 35 55 64 81 104 113 92
49 64 78 87 103 121 120 101
72 92 95 98 112 100 103 99

JPEG compression scheme quantization values

From the table above, coefficients (4, 1) and (3, 2), or (1, 2) and (3, 0), make suitable candidates for comparison, since their quantization values are identical. The DCT block encodes a 1 if Ai(x1, y1) > Ai(x2, y2); otherwise it encodes a 0. The coefficients are swapped if their relative size does not match the bit to be encoded. Because the DCT coefficients of the middle frequencies tend to have similar magnitudes, swapping them should not alter the watermarked image significantly. Introducing a watermark strength constant k, such that Ai(x1, y1) - Ai(x2, y2) > k, further improves the robustness of the watermark.
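A sketch of the comparison scheme on a block of already-computed DCT coefficients (the location pair (4, 1)/(3, 2) comes from the quantization table above; the strength constant K = 5 and the coefficient values are illustrative, and the DCT computation itself is omitted):

```python
LOC1, LOC2 = (4, 1), (3, 2)   # mid-band locations with equal quantization value (22)
K = 5                          # watermark strength constant

def encode_bit(block, bit):
    a, b = block[LOC1[0]][LOC1[1]], block[LOC2[0]][LOC2[1]]
    if (a > b) != bool(bit):           # swap so the relative order encodes the bit
        a, b = b, a
    if bit and a - b < K:              # enforce a margin of at least K
        a = b + K
    elif not bit and b - a < K:
        b = a + K
    block[LOC1[0]][LOC1[1]], block[LOC2[0]][LOC2[1]] = a, b
    return block

def decode_bit(block):
    return 1 if block[LOC1[0]][LOC1[1]] > block[LOC2[0]][LOC2[1]] else 0

dct = [[0.0] * 8 for _ in range(8)]   # stand-in for one 8x8 block of DCT coefficients
dct[4][1], dct[3][2] = 3.0, 7.0
assert decode_bit(encode_bit(dct, 1)) == 1
assert decode_bit(encode_bit(dct, 0)) == 0
```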
Coefficients that do not meet this criterion are modified, through the addition of random noise, until they satisfy the relation. Increasing k thus reduces the probability of detection errors at the cost of additional image degradation. Another possible technique is to embed a PN sequence Z into the middle frequencies of the DCT block. A given DCT block p, q can be modified as shown below:

OZ(p, q)(u, v) = I(p, q)(u, v) + k * Z(u, v),  for u, v in FM
OZ(p, q)(u, v) = I(p, q)(u, v),                otherwise

Embedding of a CDMA watermark into the DCT middle frequencies

For each 8×8 block p, q of the image, the DCT of the block is first computed. Within that block, the middle-frequency components FM are added to the PN sequence Z, multiplied by the gain factor k. Coefficients in the low and high frequencies are copied over to the transformed image unaffected. Each block is then inverse-transformed to give the final watermarked image OZ.
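The block-wise procedure can be sketched end to end with a naive orthonormal 2-D DCT in pure Python (the band definition 3 <= u + v <= 5 for FM, the gain k, and the constant test block are all illustrative assumptions):

```python
import math
import random

N = 8

def _c(k):
    return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

def _cos(x, u):
    return math.cos((2 * x + 1) * u * math.pi / (2 * N))

def dct2(block):
    # Naive orthonormal 2-D DCT-II of an N x N block.
    return [[_c(u) * _c(v) * sum(block[x][y] * _cos(x, u) * _cos(y, v)
                                 for x in range(N) for y in range(N))
             for v in range(N)] for u in range(N)]

def idct2(coef):
    # Inverse transform (DCT-III), exact up to floating-point error.
    return [[sum(_c(u) * _c(v) * coef[u][v] * _cos(x, u) * _cos(y, v)
                 for u in range(N) for v in range(N))
             for y in range(N)] for x in range(N)]

MID = [(u, v) for u in range(N) for v in range(N) if 3 <= u + v <= 5]  # FM band

def embed_block(block, z, k):
    coef = dct2(block)
    for (u, v), w in zip(MID, z):
        coef[u][v] += k * w        # low and high frequencies are left untouched
    return idct2(coef)

rng = random.Random(7)
z = [rng.choice([-1, 1]) for _ in MID]
marked = embed_block([[128.0] * N for _ in range(N)], z, k=2.0)
coef = dct2(marked)
corr = sum(coef[u][v] * w for (u, v), w in zip(MID, z)) / len(MID)
assert corr > 1.0   # the pattern is recoverable from the marked block
```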

Thursday, September 19, 2019

The Downfall Of Macbeth In Mac

People and ideas can greatly affect the outcome of a person's life, determining whether the outcome will be successful or disastrous. Decisions and actions can also influence outcome. This is the case in Macbeth. Many factors cause the ruin of Macbeth and for that reason, all the blame for his downfall cannot be placed on Macbeth himself, despite the fact that he is the one that commits or has people commit the murders which lead to his downfall. Lady Macbeth's encouragement and convincing lead Macbeth to take the first step towards his destruction. The witches and their prophecies are equally accountable, since the witches reveal their predictions to Macbeth, giving him a glimpse into his future. This glimpse represents the beginning of the end of his life. Macbeth and Lady Macbeth, as well as the witches and their prophecies, are all responsible for Macbeth's downfall. The witches are responsible for the downfall of Macbeth because they are the ones who reveal the prophecies to him. 1. Witch. All hail, Macbeth! Hail to thee, Thane of Glamis! 2. Witch. All hail, Macbeth! Hail to thee, Thane of Cawdor! 3. Witch. All hail, Macbeth, that shalt be King hereafter! If Macbeth had never encountered the witches, they would never have revealed the prophecies to him. He would have become the Thane of Cawdor, and he would never have even considered the idea of making himself the King of Scotland. It would have remained a fantasy that would probably never have come true in the way that it did. The witches are the ones who allow Macbeth to discover his future, and by doing this, they give him the opportunity to consider making the prophecy come true. The only way to do this is to murder Duncan, the present King of Scotland. At first he is reluctant to do so. Lady Macbeth points out that he has the perfect opportunity, since the King will be spending the night at their castle, Inverness.
Macbeth's conscience, however, is holding him back from committing the murder. He's here in double trust: First, as I am his kinsman and his subject, Strong both against the deed; then as his host, Who should against his murderer shut the door, Not bear the knife myself. (I.vii.12-16) He realizes that he has a responsibility to Duncan to protect him from a murderer and not to actually murder Duncan himself. Macbeth is also supposed to be loyal to the king, especially since he is a relative and a subject.

Wednesday, September 18, 2019

Essay --

Overview Upon entering the field of medicine, physicians agree to practice according to the Hippocratic Oath, which states, "first, do no harm." Although it seems like this task would be straightforward, it is not always clear how to carry out this oath. One example where conflicting viewpoints are often argued is how to resolve child abuse cases such as Munchausen Syndrome by Proxy (MSbP). MSbP is a relatively new form of child abuse in which a parent deliberately fabricates illnesses in their child to receive medical attention. Over the past 30 years, the recognition and prevalence of MSbP has increased; however, it is still difficult to identify and is highly underdiagnosed (Maldonado, 2002). It has stirred much controversy, and even the name itself is a topic of debate because MSbP is hard to define. Other terms to describe the condition include factitious disorder by proxy, pediatric condition falsification, and medical child abuse (Lowen & Reece, 2008). In treating a child who may be a victim of MSbP, using covert video surveillance (CVS) is controversial because blurred lines exist between using it for diagnostic reasons versus legal reasons. When conducting CVS, typically two hidden cameras, placed in objects such as a wall clock or a ceiling light, monitor a parent's activity without their knowledge (Shabde & Craft, 1999). A member of the care team, such as a nurse, and a police officer observe the video footage from a remote site to look for suspicious activity and intervene if the parent begins to harm the child. Activity typically seen on footage includes a parent poisoning their child with cleaning solution or salt, removing medical devices such as tubes, and smothering the child. Clinical Per... ...rs of the care team is communicating effectively with each other.
Personal Perspective After researching the various viewpoints regarding CVS, I still maintain that it is ethical when used in good faith in order to protect the child. Many case studies have shown that not only is the child more likely to be put in a safer environment when child protective services has the evidence to intervene, but it also saves siblings from potential harm. I believe that the physician's role is to act in the best interest of the patient, especially when outside factors pose a threat to the child's well-being. Although many view CVS as unethical because it infringes on privacy rights, it can also provide a great benefit. Studies such as the one conducted by David Southall make it difficult to dispute that there is value in using CVS as a form of diagnosis and intervention.

Tuesday, September 17, 2019

Outline and Evaluate Biological Therapies as Treatments of Depression

Outline and evaluate biological therapies as treatments of depression. There are many forms of treatment to cure depression, many of which are biological. These target the physical and chemical side of the body. Anti-depressants and other drugs are the most common forms of treating depression. They work by boosting levels of insufficient neurotransmitters such as serotonin and noradrenaline: they either reduce the amount of re-absorption or block the enzyme that breaks the neurotransmitter down. This increases the amount of neurotransmitter available, so that neighbouring cells become excited. Tricyclics are used to block the transporter mechanism that re-absorbs both serotonin and noradrenaline into the pre-synaptic cell after it has fired. As a result, more neurotransmitter is left in the synapse, making the transmission of the next impulse easier. The treatment of depression has three phases. The first is the acute phase, in which the current symptoms are treated. Then comes the continuation phase, in which treatment is continued for six months and the medication is gradually withdrawn to prevent relapse. Lastly there is the maintenance phase, which is recommended for individuals who have recurrent depressive episodes. One of the most common types of anti-depressant drug is the Selective Serotonin Re-uptake Inhibitors (SSRIs), which are associated with serotonin, a neurotransmitter that has been found to be involved in depression. SSRIs work by stopping the nerve cells from re-absorbing serotonin that has been released into the synapse, which in turn increases the amount of serotonin available. However, SSRI anti-depressants may not be able to treat all forms of depression. Kirsch et al (2008) found that only in the most severe cases of depression was there a significant advantage to using an SSRI.
This suggests that anti-depressants may not be able to help those with mild or moderate depression. Another type of anti-depressant is the monoamine oxidase inhibitors (MAOIs), which act on noradrenaline in the synapses. These block the enzyme monoamine oxidase from breaking down noradrenaline, and thus increase the amount of noradrenaline available at the synapse. Low levels of noradrenaline in certain areas of the brain have been linked with depression, and so MAOIs are an effective anti-depressant. Nonetheless, in the case of children and adolescents, anti-depressants may fail to work altogether. Hammen (1997) found that anti-depressants appeared to be less useful with children and adolescents than with adults. This could be due to developmental differences in their brain neurochemistry, meaning that children are not as affected by anti-depressants. Other forms of treatment may therefore need to be considered when treating a depressed child, which could also call into question the overall effectiveness of anti-depressants. There are also safety concerns with SSRIs, such as the possibility that their use may lead to an increase in suicidal thoughts in vulnerable people. Ferguson et al (2005) conducted a review of studies which found that those in an SSRI condition, compared to a placebo condition, were twice as likely to attempt suicide. This risk has been found to be higher among adolescents than adults, suggesting that anti-depressants may in fact be more harmful than beneficial to some depressed individuals. Another issue with the treatment of depression is that there may be misdiagnosis due to age. Benek-Higgins et al (2008) found that the symptoms of depression in the elderly are masked by the natural changes in their bodies and lifestyles.
Anti-depressant medication is therefore less likely to be prescribed to them, which may lead to depression in the elderly not being treated at all. The elderly have also been found to be harder to treat because they are less likely to seek professional help: they feel that there is a social stigma attached to being "mentally ill" and do not wish to lose their independence if they are diagnosed. As a result they are not diagnosed and, in turn, not treated for their depression. Using a placebo during an experimental treatment may also raise an ethical issue, as lying to depressed individuals by telling them they are taking medication to make them better could psychologically make them worse upon learning that they have been lied to. A thorough debriefing and regular follow-ups are therefore needed for such individuals. There is also the risk of publication bias: Turner et al (2008) suggested that selective publication is used to emphasise the positive outcomes of anti-depressant treatments. Drug companies may try to present their drugs positively even if they are not effective, and biased conclusions may lead to inappropriate treatment decisions. Many therapies, such as drug therapy, are conducted regularly to treat depression, but there is no agreed answer on how to measure their effectiveness. How are we meant to know when the patient has been "cured", when there is no particular destination that one is trying to reach? There is no particular time at which to measure effectiveness, whether during the therapy or six or so months after. The use of drugs may therefore not be as effective as we think, because they do not lead us to a clear cure. Electroconvulsive therapy (ECT) involves applying electrodes to a patient's head and passing an electric current through their brain. This causes a seizure lasting a few seconds, though it is not clear why or how ECT works.
Oxygen is given to the patient during the treatment to compensate for their inability to breathe, and the treatment is given three times per week depending on the severity of the depression. It is used in the most severe cases, where a patient is in danger of harming themselves or is extremely suicidal, and anti-depressants and therapy are not having any effect. The seizure induced by ECT is said to regulate the mood of the patient, which decreases the depressive episode. Yet there are many side effects to the use of ECT. For example, when ECT was first introduced it resulted in injuries such as broken bones; however, modern practice, with the use of muscle relaxants and the treatment taking place under anaesthetic, has decreased the likelihood of injuries. Memory loss, particularly of events prior to the ECT, is still very likely, and it is not clear how long the memory loss may last. So although ECT has been found to be effective for those with depression, the negatives may outweigh the positives to some extent. There is much evidence supporting the effectiveness of ECT. For example, Gregory et al (1985) found a significant difference in outcome in favour of real ECT compared with sham ECT, in which the patient is anaesthetised but no current is passed. This suggests that ECT itself may be very effective for people with depression. In contrast to anti-depressants, ECT has been shown to be more efficient: Scott (2004) found that in short-term treatment ECT was better than drug therapy, which again supports the effectiveness of ECT and how it could be used more often. One way of minimising the cognitive problems associated with ECT is to use unilateral ECT, where the electrodes are placed on only one side of the skull, rather than bilateral ECT, where the electrodes are placed on both sides.
Studies have found that unilateral ECT is less likely to cause cognitive problems than bilateral ECT, showing that unilateral ECT may be preferable and could cause fewer side effects. A further concern with ECT is the consent of the patients receiving the treatment: the DOH report (1999) found that 59% of 700 patients who had received ECT admitted to not giving consent to the treatment. Even when patients volunteered to receive the treatment, there was still an issue with fully informed consent about the side effects. ECT may therefore not be given to all patients with fully informed consent, and could be seen as ethically incorrect.

Monday, September 16, 2019

Norms

Norms A norm is an expected and accepted behaviour in a society. We get our norms from our parents, cultures, or traditions, though sociologists disagree on where they come from. Norms are based on a kind of agreement, so they can change over time, which is called social construction. People also see norms as a 'social glue', as they bind different individuals together. A norm requires an action, as it is a behaviour. An example of a norm is the fact that most people put on their seatbelts once they get in the car. Norms are passed on from generation to generation and 'adapted to fit the social climate', which is the change of norms, values, family, gender, race, etc. However, there are people who don't follow the norms, and they are called deviants. Fox is a sociologist who spent three years observing English norms and culture and wrote a book based on her studies. One of the things that caught her attention was the use of mobile phones, which seemed to be in everyone's life regardless of class, gender, ethnicity and, increasingly, age. Fox mentioned in her book that people use them for different purposes: teenagers use them as a status symbol, whereas men are interested in the technological aspects of what they can do. She also believes that women who are alone in coffee bars or anywhere else use them as a social barrier or a form of attachment. Values Values are everyday morals or beliefs which most of the people in a society agree on. They develop over time, and not easily, but they can be changed. Values can also underline social norms; for example, when you're at the shop and you go to the end of the queue, you value fairness. When you stay quiet in the doctor's waiting room, you value health and professional advice. Most people in the same society share these values, so they are not the same as attitudes, in which people can differ enormously. You may think there are some values that are only yours, but the truth is they're shared with many others.
You've learned them from other people; this doesn't mean you chose them from different possibilities, but that you've picked them up during your life. There is a debate between sociologists on whose values are the mainstream ones in society; they may be those of the dominant ethnic group, or even the values of the rich, but some consider them to be those of politicians, as they propose the laws of society. Values vary largely between societies, so what is normal here can be really strange in another country. Status Status can be held by one person or a group; it is based on a social position. It can be linked with honour, prestige and social standing. You can have a low and a high status in society at the same time; for example, if you're the leader of a gang, you have a high status within the gang but a low one in wider society. There are two types of status: ascribed status and achieved status. Ascribed status cannot be changed easily; it's something that you were born into, for example your gender and ethnicity. Achieved status is what you worked for; it can be an educational qualification or entering the job you always wanted. Achieved status is believed to be a relevant feature of life in the contemporary UK. Roles The set of norms that goes with a status is called a role. A role is a series of behaviours, routines or responses that we give in our everyday life. We all have roles in our lives, which can change with our age and adapt to our societies. You can be a student at school, a sibling and a friend at the same time, and all these roles will come with expectations. As a student you'll be expected to learn, participate in class and do your homework. You as a student will place certain expectations on your teacher and school. Roles develop during social processes, but we are born into some roles, like being a daughter or a son and a sibling; these are all ascribed to people. Role conflict can occur, as a person has many roles and sometimes these roles will conflict with each other.
For example, you can be a student expected to do your homework, but you also have a part-time job and your boss expects you to be there, and you can't do both of them at the same time. Having a role conflict is an unavoidable part of life. Culture The word culture is used to describe the customs, beliefs and ways of life of a society or within a society. It is also a contested concept, which means that sociologists vary in their exact definition of it. Williams says it is a 'way of life' and that it contains all the details of the way people live their lives in a society: their norms, interests, values and ideas on life. If we take the meaning of culture this way it becomes a comprehensive definition, allowing us to connect it to many different groups within and between societies. Some people argue that Williams' view of culture is so wide that it has no meaning at all, because he practically says that anything can be a part of culture. Another sociologist, Woodward, says that the culture of a society is formed on 'shared meanings, values and practices'. This definition links culture with shared norms and values. Other sociologists propose that there are different types of culture, saying that there is a high culture of which elite practices are a part. High culture High culture is the elite, upper class of society, the people that have an ascribed status in life. This concept is linked to Leavis, who was writing in the 1930s. People in high culture are often associated with arts such as classical music and opera, or sports like polo and lacrosse, and other posh activities. They have social closure, which practically means that there is no entry for 'outsiders', ensuring that high culture remains elite and exclusive. People in high culture tend to have special positions in the UK, both economically and socially.
However, some sociologists have questioned the existence of high culture, as more people can achieve their statuses and become rich, so they can buy their access to elite groups. Subculture A subculture is practised by a smaller group in society. They have distinct norms and values, which makes them a little part of society. These subcultures can be emos and skaters, or religious groups such as Scientologists. As some of these subcultures are quite small, they need to raise more awareness, for example religious movements. The members of these subcultures change over time, and so do the subcultures within society and its concerns. People are mostly part of these subcultures in their young adulthood, and often they move away from them as they grow up. However, some people stay connected to their subculture in some way for the rest of their lives.

Sunday, September 15, 2019

Pathophysiology Case Study Essay

Patient Case Question 1: For which condition is this patient likely taking nifedipine? Nifedipine is a calcium channel blocker used to treat high blood pressure and chest pain. The patient's past medical history indicates that he has had hypertension "for years," so he is most likely taking nifedipine to manage this condition. He may also be taking nifedipine to prevent chest pain from his past condition of coronary artery disease (CAD). Patient Case Question 2: For which condition is this patient likely taking lisinopril? Lisinopril is an ACE inhibitor that treats high blood pressure and heart failure. The patient could be taking lisinopril in tandem with nifedipine to manage his hypertension and coronary artery disease. Patient Case Question 3: For which condition is this patient likely taking paroxetine? Paroxetine is used to treat various mood disorders. It is most likely that the patient is taking paroxetine to treat his generalized anxiety disorder, which he has been experiencing for the past 18 months (according to his past medical history). Patient Case Question 4: What is meant by "tenting of the skin" and what does this clinical sign suggest? "Tenting of the skin" refers to a skin turgor test. By pulling a fold of skin from the back of the hand, lower arm, or abdomen with two fingers, one can assess the ability of the patient's skin to change shape and return to normal (elasticity). Tenting of the skin indicates that the skin is not returning to normal quickly, which means the person has severe dehydration, a fluid loss of 10% of body weight. The result of his skin turgor test (skin with poor turgor) indicates late signs of dehydration, and the presence of tenting indicates the severity of his dehydration. Patient Case Question 5: Are the negative Grey Turner and Cullen signs evidence of a good or poor prognosis?
A positive Cullen sign occurs when a patient has superficial bruising in the subcutaneous fat around the umbilicus. A positive Grey Turner sign occurs when a patient has bruising of the flanks (last rib to top of hip), which indicates a retroperitoneal hemorrhage. Both Cullen and Grey Turner signs are used to indicate/predict acute pancreatitis; when these signs are present the patient has a high rate of mortality (37%). The patient tested negative for both Grey Turner and Cullen signs, so his prognosis is good. Patient Case Question 6: Identify THREE major risk factors for acute pancreatitis in this patient. The patient has sinus tachycardia, which, paired with his severe dehydration, suggests acute pancreatitis. The patient also has a history of alcohol abuse and is regularly taking ACE inhibitors, which puts him at a high risk of developing acute pancreatitis. He also has diminished bowel sounds, which indicate possible acute pancreatitis. Patient Case Question 7: Identify TWO abnormal laboratory tests that suggest that acute renal failure has developed in this patient. The patient's blood urea nitrogen (BUN) level is 34 mg/dL, which indicates decreased kidney function. The patient has a potassium level of 3.5 meq/L, which is below the normal range (3.7-5.2 meq/L); this indicates possible renal artery stenosis. Both of these lab results suggest that the patient has developed acute renal failure. Patient Case Question 8: Why are hemoglobin and hematocrit abnormal? The patient's hemoglobin level is 18.3 g/dL; normal hemoglobin levels for men are between 14 and 18 g/dL. The patient's hematocrit level is 53%; normal hematocrit levels are 40-50%. These abnormally high lab results indicate hemoconcentration consistent with his severe dehydration. The patient has developed acute renal failure, so these test results are as expected for a patient under such conditions.
Patient Case Question 9: How many Ranson criteria does this patient have and what is the probability that the patient will die from this attack of acute pancreatitis?

The patient meets seven of the Ranson criteria: his WBC count was over 16,000, he is over age 55, his blood glucose level was higher than 200 mg/dL, his LDH level was over 350, his BUN level was elevated, and he had high fluid needs due to his dehydration. Based on the Ranson criteria, his predicted mortality approaches 100%, so it is very likely that the patient will die from this attack of acute pancreatitis.

Patient Case Question 10: Does the patient have a significant electrolyte imbalance?

The patient's sodium level is 1 mEq/L below the normal range, and his potassium level is 0.2 mEq/L below the normal range. This indicates that renal complications are interfering with his electrolyte balance.

Patient Case Question 11: Why was no blood drawn for an ABG determination?

No blood was drawn for an ABG determination because the patient's lungs were clear to auscultation, so there was no need to test his blood pH. His urine pH was also within the normal range, further arguing against the need for an ABG.
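The Ranson scoring described in Question 9 is simple arithmetic: count the criteria met and map the total to a mortality band. The sketch below illustrates the idea for the admission-time criteria only; the thresholds and mortality bands are the commonly quoted ones and are included here as assumptions for illustration, not as a clinical reference.

```python
# Illustrative sketch of Ranson-style scoring at admission.
# Thresholds are the commonly cited admission criteria for
# non-gallstone pancreatitis (an assumption for this sketch).

ADMISSION_CRITERIA = {
    "age_over_55":      lambda p: p["age"] > 55,
    "wbc_over_16k":     lambda p: p["wbc"] > 16_000,   # cells/mm^3
    "glucose_over_200": lambda p: p["glucose"] > 200,  # mg/dL
    "ldh_over_350":     lambda p: p["ldh"] > 350,      # IU/L
    "ast_over_250":     lambda p: p["ast"] > 250,      # IU/L
}

def ranson_admission_score(patient):
    """Count how many admission criteria the patient meets."""
    return sum(1 for test in ADMISSION_CRITERIA.values() if test(patient))

def mortality_band(score):
    """Rough mortality bands often quoted with the full 11-point score."""
    if score <= 2:
        return "low (~1%)"
    if score <= 4:
        return "moderate (~15%)"
    if score <= 6:
        return "high (~40%)"
    return "very high (approaching 100%)"

# Hypothetical values loosely echoing the case (not the actual chart data).
patient = {"age": 58, "wbc": 16_500, "glucose": 210, "ldh": 360, "ast": 300}
score = ranson_admission_score(patient)
print(score, mortality_band(score))
```

A score of seven or more, as in this case, falls in the highest band, which matches the near-100% predicted mortality given above.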

Saturday, September 14, 2019

The Effects Of Employee Satisfaction Essay

This week’s reading covered regression and inferences about differences. Regression is a statistical method that attempts to determine the strength of the relationship between one dependent variable and a series of other changing variables. This information helps determine which factors affect certain outcomes and which do not. This article was really interesting, as it explored a very realistic question: do positive employee attitudes and behaviors influence business outcomes, or do positive business outcomes influence positive employee attitudes and behaviors? At its core, regression takes a group of variables thought to predict an outcome and tries to find a mathematical relationship between them. This relationship is typically linear and takes into account all the individual data points. The hypothesis in this study by Daniel Koys was that employee satisfaction, organizational citizenship behavior, and employee turnover influence profitability and customer satisfaction. Data was gathered from a restaurant chain using employee surveys, manager surveys, customer surveys, and organizational records. Regression analyses showed that employee attitudes and behaviors at a given Time 1 were related to organizational effectiveness at a later Time 2; however, additional regression analyses showed no significant relationship between organizational effectiveness at Time 1 and employee attitudes and behaviors at Time 2. Overall, it was determined that employee behaviors have a more direct impact on organizational effectiveness than employee attitudes do, especially when the concept of organizational effectiveness includes profitability as well as customer attitudes toward the restaurant. Further research was conducted in a restaurant chain to determine the relationship between employee satisfaction and organizational citizenship. Employee satisfaction was measured using a survey of hourly employees.
Organizational citizenship behavior was measured via a survey of the employees’ managers. In Year 1, 774 hourly employees (an average of 28 per unit) and 64 managers (an average of 2 per unit) responded to the surveys. In Year 2, 693 hourly employees (average of 25) and 79 managers (average of 3) responded. Customer satisfaction was measured by a survey conducted in 24 units. Surveys were distributed in the restaurants at predetermined times by the restaurant host/hostess, collecting 5,565 customer responses for Year 1 (an average of 232 per unit) and 4,338 responses for Year 2 (an average of 182 per unit). Based on the results of the study, the data supported the idea that human resource factors such as positive employee attitudes influence organizational effectiveness. The results showed that Year 1’s outcomes account for 14% to 31% of the variance in Year 2’s organizational effectiveness. The results showed some support for the hypothesis that Year 1’s unit-level employee satisfaction, organizational citizenship behavior, and turnover predict Year 2’s unit-level profitability, but there was stronger support for the hypothesis that they predict Year 2’s unit-level customer satisfaction. In the reading it was noted that employee satisfaction had the only significant beta weight. Although this implies that employee satisfaction influences customer satisfaction, customer satisfaction may still affect employee satisfaction. There may be a reciprocal relationship between the two, but as with all statistical results, one can only conclude that the relationship between employee satisfaction and organizational effectiveness remains an open question needing continued research.
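The "variance explained" figures above (14% to 31%) come from the R² of a fitted regression line. A minimal sketch of the kind of simple linear regression Koys ran is shown below; the per-unit numbers are invented for illustration and are not the study's data.

```python
# Minimal ordinary least squares sketch: fit y = a + b*x and report R^2,
# the share of variance in y explained by the line. Data are made up.

def fit_line(xs, ys):
    """Ordinary least squares estimates for intercept a and slope b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b

def r_squared(xs, ys, a, b):
    """1 - (residual sum of squares / total sum of squares)."""
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical per-unit data: Year 1 employee satisfaction (1-5 scale)
# against Year 2 customer satisfaction score.
sat_y1 = [3.1, 3.5, 3.8, 4.0, 4.2, 4.6]
cust_y2 = [68, 71, 74, 76, 79, 83]

a, b = fit_line(sat_y1, cust_y2)
print(f"slope={b:.2f}, R^2={r_squared(sat_y1, cust_y2, a, b):.2f}")
```

A positive slope with a nontrivial R² is what the study's Time 1 → Time 2 analyses found, while the reverse-direction regressions (Time 1 effectiveness predicting Time 2 attitudes) showed no significant relationship.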