Tuesday, December 24, 2019

Down Syndrome or Trisomy 21 Essay - 916 Words

People are different in so many ways, from their physical appearance to their way of thinking. We live in a world where we have to deal with diversity; unfortunately, not all people are conscious of the right manner to adopt. On buses, people avoid sitting next to them, and some are rude and stare at them because of their noticeably unusual appearance. In the street, they are subjected to bullying and treated unequally. Their birth is more like a dramatic event than a blessing for some of their parents. I am talking about trisomic people, those born with Down syndrome. I am devoting this essay to this particular case of diversity because I feel genuinely concerned about the way some people behave towards them. […] Nevertheless, each person with Down syndrome is a unique individual who may possess these characteristics to different degrees, or not at all. Indeed, they are not clones; the physical features and medical problems associated with Down syndrome can vary extensively from child to child. For instance, while some kids need a lot of medical attention, others lead healthy lives. As for mental abilities, these individuals face some difficulties in their development. Their IQ is below average, varying between 20 and 50, and in some cases the intellectual disability is more severe. In contrast, and surprisingly, individuals with Down syndrome have better language understanding than ability to speak: "Between 10 and 45 percent have either a stutter or rapid and irregular speech, making it difficult to understand them." (2) This syndrome affects kids' ability to learn in different ways, but most, with the support of their families, have worked on moderating the intellectual impairment. In fact, they can and do learn, and are capable of developing skills throughout their lives. It is only a matter of time, and it is really important not to compare a trisomic child with a typical sibling. Thirdly, there are many reasons that lead to having a baby with Down syndrome.

Monday, December 16, 2019

The Beginning of the Civil Rights Movement Free Essays

The Beginning of the Civil Rights Movement
Michelle Brown

The Civil Rights Movement of the 1950s and 1960s was a profound turning point in American history. African Americans had been fighting for equality for many years, but in the early 1950s the fight started to heighten. From Rosa Parks, to Martin Luther King Jr., to Malcolm X, the fight would take on many different forms over the span of two decades, and was looked at from many different points of view.

The Beginning of the Civil Rights Movement
For most historians the beginning of the Civil Rights Movement started on December 1, 1955, when Rosa Parks refused to give her seat to a white person on a bus in Montgomery, Alabama. This is when the rise of the Civil Rights Movement began; however, there were several earlier incidents which helped to lead up to the movement. In 1951, the "Martinsville Seven" were African American men tried by an all-white jury for the rape of a white woman from Virginia. All seven were found guilty and, for the first time in Virginia history, were sentenced to the death penalty for rape (Webspinner, 2004-2009). In this same year the African American students at Moton High decided to strike against unequal educational treatment. Their case was later added to the Brown v. Board of Education suit in 1954 (Webspinner, 2004-2009). In June 1953, a bus boycott was held in Baton Rouge, LA. After the bus drivers refused to enforce Ordinance 222, an ordinance which changed segregated seating on buses so that African Americans would fill the bus from the back forward and whites would fill it from the front back on a first come, first served basis, the ordinance was overturned. Led by Reverend Jemison and other African American businessmen, the African American community decided to boycott the bus system. Later in the month Ordinance 251 was put in place, allowing a section of the bus to be black only and a section to be white only; the rest of the bus would be first come, first served (Webspinner, 2004-2009). In May 1954, Chief Justice Earl Warren delivered the following verdict on Brown v. Board of Education: "We come then to the question presented: Does segregation of children in public schools solely on the basis of race, even though the physical facilities and other 'tangible' factors may be equal, deprive the children of the minority group of equal educational opportunities? We believe that it does… We conclude that in the field of public education the doctrine of 'separate but equal' has no place. Separate educational facilities are inherently unequal. Therefore, we hold that the plaintiffs … are deprived of the equal protection of the laws guaranteed by the Fourteenth Amendment." (Webspinner, 2004-2009). Even though the actual desegregation of schools did not take place in 1954, this ruling was a major step in the Civil Rights Movement which took place prior to Rosa Parks.

Nonviolent Protest Movement
Martin Luther King Jr. went far in his belief in and commitment to nonviolent resistance. King believed, and taught, six important points about nonviolent resistance. The first was that nonviolent resistance is not cowardly: "According to King, a nonviolent protester was as passionate as a violent protester. Despite not being physically aggressive, 'his mind and emotions are always active, constantly seeking to persuade the opponent that he is mistaken.'" (McElrath, 2009).
His second point was that nonviolent resistance would awaken moral shame in a protestor's opponent, which would then lead the opponent to understanding and friendship. King's third point was that nonviolent resistance was a battle against evil, not a battle against individuals. His fourth point stated that suffering was required in nonviolent resistance: "Accordingly, the end was more important than safety, and retaliatory violence would distract from the main fight." (McElrath, 2009). King's fifth point was that the nonviolent resister was on the side of justice. His final point was that the power of love rests with nonviolent resisters; this is the love of understanding, not of affection: "Bitterness and hate were absent from the resister's mind, and replaced with love." (McElrath, 2009). King continued to preach nonviolent resistance through all the boycotts, sit-ins, protest marches, and speeches. After being arrested during the Birmingham, Alabama campaign of 1963, he wrote letters from the Birmingham jail about nonviolent resistance. Later in 1963 he led a massive march on Washington DC; this is where he delivered his "I Have a Dream" speech. In 1964 he was awarded the Nobel Peace Prize for his efforts. Up until his assassination in April 1968, "he never wavered in his insistence that nonviolence must remain the central tactic of the civil-rights movement, nor in his faith that everyone in America would some day attain equal justice." (Chew, 1995-2008).

Malcolm X
Malcolm X, who at one time was a minister for the Nation of Islam, had a more militant style of seeking rights for African Americans. After the Washington DC march he did not understand why African Americans had been so excited about a demonstration "run by whites in front of a statue of a president who has been dead for a hundred years and who didn't like us when he was alive" (Adams, 2009). Malcolm, to the protestors, represented a militant revolutionary who would stand up and fight to win equality, while also being a person who wanted to bring on positive social services and was an exceptional role model. In fact, the ideas of Malcolm X were deeply rooted in the foundations of the Black Panther Party. Malcolm X was murdered in 1965, but his beliefs lived on long after.

Conclusion
While King and Malcolm X never shared the same platform, and had two very different beliefs about how to end segregation and racism, they were both key players in the Civil Rights Movement. Martin Luther King Jr. preached nonviolent resistance, and Malcolm X had a militant style to his beliefs. After Malcolm X was murdered, King wrote the following to his widow: "while we did not always see eye to eye on methods to solve the race problem, I always had a deep affection for Malcolm and felt that he had a great ability to put his finger on the existence of the root of the problem." (Adams, 2009).

References:
Adams, R. (2009). Martin and Malcolm, Two 20th Century Giants. Retrieved on September 27, 2009, from http://www.black-collegian.com/african/mlk/giants2000-2nd.html
Chew, R. (1995-2008). Martin Luther King, Jr., Civil-Rights Leader, 1929-1968. Retrieved on September 27, 2009, from http://www.lucidcafe.com/library/96jan/king.html
McElrath, J. (2009). Martin Luther King's Philosophy on Nonviolent Resistance, The Power of Love. Retrieved on September 27, 2009, from http://afroamhistory.about.com/od/martinlutherking/a/mlks_philosophy_2.htm
Webspinner. (2004-2009). We'll Never Turn Back: History Timeline of the Southern Freedom Movement. Retrieved on September 27, 2009, from http://www.crmvet.org/tim/timhome.htm

Sunday, December 8, 2019

Pekeliling Flats of Kuala Lumpur Essay Example For Students

Pekeliling Flats of Kuala Lumpur

Outline
3.1 Introduction to Case Study
3.2 Assembling Method
3.3 Evaluation and Comparison
  3.3.1 Cost
  3.3.2 Speed
  3.3.3 Labour Requirement
  3.3.4 Quality
  3.3.5 Productivity: Aims; Description of Data; Rationale for Combining Data Points; Result and Discussion; Comparison of Labour Productivity between Structural Building Systems; Cycle Time Comparison between Structural Building Systems; Summary
  3.3.6 Wastage
3.4 Conclusion

3.1 Introduction to Case Study
Pekeliling Flats is situated on the Lebuhraya Mahameru-bound side of Jalan Tun Razak, Kuala Lumpur. The flats are also known as the Tunku Abdul Rahman public flats. Pekeliling Flats is one of Kuala Lumpur's earliest public housing projects and was built in 1967. There were 11 residential blocks comprising 2,969 units. For the construction of the first pilot project, the Government held negotiations with a joint-venture company, Citra/Boon & Cheah, which intended to use the French Tracoba system of construction. But the negotiations were unsuccessful and the project was subsequently opened to public tender. The tender was eventually awarded to Gammon/Larsen Nielsen using the Danish system of large-panel industrialised prefabrication. Construction was then launched in 1968. The scheme at Jalan Pekeliling comprises 4 blocks of 4-storey flats and shops and 7 blocks of 17-storey flats, totalling 3,009 units, and was completed within 27 months, including the time taken to build the RM 2.5 million casting yard for the prefabricated elements at 10½ miles Jalan Damansara. The whole construction of the flats uses the prefabricated concrete box method, which is similar to the British Truscon system, whereby standard trough-shaped concrete boxes incorporate façade walls made from lightweight materials, ceilings consisting of stapled plasterboard, as well as internal fittings.

3.2 Assembling Method
The boxes are made by precasting the wall panels with ribs downwards and smoothing down the concrete once it has semi-set. Once the walls have hardened, they are removed from the moulds by means of an overhead gantry and placed into a jig. Foundation pads are cast, and on top of these, precast concrete beams of inverted-T cross-section are laid. The boxes are then unloaded straight from the lorry and placed in position upon these inverted-T beams. The boxes are connected together at structural floor level by two connecting plates, which are bolted to inserts on either side of the joint. Once the boxes forming one course from façade to façade have been bolted together along the wall, again through cast-in sockets joined by steel plates and bolts, only thin vertical joints remain visible. The vertical channels between the adjoining ribs of the end-to-end boxes make rigid cross-walls of remarkable sound insulation. Horizontal reinforcement rods are lowered in and concrete is cast in, resulting in the formation of a party wall.

3.3 Evaluation and Comparison
3.3.1 Cost
Industrialised prefabricated construction of the Pekeliling project was more expensive than the conventional system: its cost was 8.1% higher than that of a conventional housing project completed around that time.
There are many advantages to be gained from using the precast construction method instead of the in-situ construction method. One of the advantages is the cost implication. Using the precast construction method can save money or reduce the cost of construction for the following reasons:

Time. Mass production as well as off-site production shortens the project timeline, gives an earlier return on investment, allows earlier occupancy and keeps the work on schedule. It is estimated that a precast structure takes up to 20% less time to construct than a similar cast in-situ structure. For example, the walls of a building can be manufactured while the on-site foundations are being built. If the work runs over schedule or past the due date, the company has to pay damages.

Durability. Precast provides long service for high-use applications and does not require regular maintenance, saving cost in the long term.

Waste minimisation. Fewer materials are required because precise mixture proportions and tighter tolerances are achievable. Less concrete waste is created due to tight control of the quantities of constituent materials. Waste materials are more readily recycled because concrete production is in one location. Sand and acids for finishing surfaces are reused, and steel forms and other materials are reused. There are reduced demands for formwork and access scaffolding, and less reliance on wet trades. The reduced need for site supervision by the main contractor also saves money. Compared to cast in-situ concrete, the following percentages of savings can be expected: 75 per cent in terms of formwork and scaffolding, and 90 per cent for wet concrete. Reclaimable precast concrete structures in urban areas can be recycled into fill and road base material at the end of their useful life.

Fewer trucks and less time are required for construction because the concrete is made off-site; this is particularly beneficial in urban areas where minimal traffic disruption is critical. Needing fewer trucks means a lower cost for hiring vehicles. The reduction in site labour also partially offsets a shortage of skilled site workers.

Priced in the $55-$65 range per linear foot of wall, precast systems are competitive with other foundation walls, particularly when costs are examined as an assembly that includes footings and sub-slab drainage. Precast walls can be installed rapidly in any weather. Because the concrete is cured in the factory, precast foundations can be backfilled as soon as the slab is placed and the first-floor bracing is in place, enhancing jobsite safety and site accessibility. Door and window openings, steel beam pockets, and brick shelves must be cast into the panels, so orders must be customised. The wall sub-base must be compacted and levelled, similar to the precision required of footings.

In making cost comparisons between alternative systems, it is imperative that total like-for-like costs are considered. There are substantial savings to be made using precast construction which are not apparent when a direct elemental cost comparison is made with alternative construction methods. To get an accurate like-for-like cost, whole-building costs must be estimated, and to assess whole-building cost accurately, each of the advantages of precast must be accurately costed. Savings through factors such as earlier completion dates, built-in fireproofing, reduced formwork and scaffolding, reduced wet trades and increased budget control can be significant.
Besides, fast-track procurement and construction may minimise capital costs by cutting financing costs and securing earlier rental income. The precast frame package typically includes columns, beams, floors, wall panels, stairs, landings, balconies, etc., all of which have a built-in minimum one-hour fire protection. Specialist precast frame manufacturers will help design teams in assessing the scope for standardised precast components for a particular project. Budget costings and erection programmes can be prepared by the precaster on receipt of outline drawings and a list of performance criteria.

For contractors and specifiers, there is a large difference between price and cost. While price is but one component of cost, it is the initial, most visible and the easier of the two to understand. Concentrating on price is not a preferable strategy in any business, especially where high-quality, dependable manufactured goods are concerned. Instead, for precast concrete products, the focus should be on the Total Cost of Ownership (TCO). How is TCO calculated? The Total Cost of Ownership is equal to the sum of the four cost components: quality, service, delivery and price. In terms of cost elements, a distinct advantage of precast concrete over cast-in-place (CIP) concrete is speed of delivery and ease of installation, or service. Both contribute directly to a lower Total Cost of Ownership. Precast concrete, especially when produced in certified plants, boasts the additional benefit of higher quality: controlled batch proportions placed under uniform conditions consistently create a better product than can be cast in place.

For illustration purposes, we will use an ordinary precast underground structure. On the construction site, scheduling is an important, unpredictable and expensive risk, and nature stacks the cost odds against CIP concrete. It is a much quicker and less risky choice to have the precast delivered and installed the same day rather than to excavate, form, pour and strip the CIP concrete, then cure, damp-proof and backfill. Given the cost matrix in Example 1, it could save six days in the construction schedule. Consider a typical below-grade structure with inside dimensions of 4 by 8 by 4 feet, in either precast (6 inches thick) or CIP (8 inches thick), and assume these facts: CIP requires three separate days to pour the base, walls and top; curing and stripping adds one day to each step, requiring six days to cast on site; damp proofing adds one day to the CIP process, totalling seven working days of open-hole time. Allowing an average $350 per cubic yard for small jobs (ready mix, rebar, mastic, labour and equipment), the 4 cubic yards of CIP required price out at $1,400. Precast takes only one day to deliver and install, including backfill of the pre-damp-proofed unit. The precast option, produced and trucked to the job site, prices out at $2,000. Installing the precast requires a four-hour minimum charge of $400 for a 20-ton crane. Hardware costs for fittings, embedded items, etc., are identical for precast and CIP. The TCO of precast is therefore fixed at $2,400, whereas the TCO of CIP is only beginning at $1,400.
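To make the arithmetic in this worked example explicit, the short sketch below recomputes both totals once a schedule cost is attached to each day the excavation stays open. It is only a sketch: the $200-per-day overhead figure and the choice to treat the precast total as fixed are assumptions for illustration; the case study itself states only the prices and durations quoted above.

```python
# Rough total-cost-of-ownership sketch for the below-grade structure example.
# The per-day open-hole overhead is an assumed figure for illustration only;
# the case study states only the base prices and the schedule durations.

PRECAST_TCO = 2000 + 400      # delivered unit plus the 4-hour minimum crane charge
CIP_BASE = 4 * 350            # 4 cubic yards of cast-in-place at about $350/yd
CIP_OPEN_HOLE_DAYS = 7        # pour, cure/strip and damp-proof: seven days open hole
DAILY_OVERHEAD = 200          # assumed cost of each extra open-hole day (illustrative)

cip_tco = CIP_BASE + CIP_OPEN_HOLE_DAYS * DAILY_OVERHEAD
print(f"precast TCO: ${PRECAST_TCO:,}   cast-in-place TCO: ${cip_tco:,}")
# 1,400 + 7 * 200 = 2,800: under this assumption the CIP figure climbs past the
# fixed $2,400 precast total, which is the point the TCO argument is making.
```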
That raises the question among sceptics as to why a contractor would spend more for precast. But many savvy contractors and specifiers recognise that their costs are really lower with precast.

Comparison of Labour Productivity between Structural Building Systems
This section evaluates the labour productivity comparison between structural building systems. Table 4 presents the descriptive statistics for the labour productivity comparison between projects, while Table 5 presents the labour productivity comparison between building systems using the mean data from the seven projects. Analysis of variance (ANOVA) results showed that the differences in labour productivity between the four building systems were statistically significant, as shown in Table 6. The precast concrete system was the most productive building system, with a labour productivity of 1.33 manhours/m², followed by cast in-situ half tunnel form (1.88 manhours/m²), cast in-situ table form (2.70 manhours/m²) and the conventional building system (4.20 manhours/m²). Taking the conventional building system as the benchmark of 100%, the cast in-situ table form system achieved a construction speed of 135%, followed by the cast in-situ half tunnel form system at 155% and the precast concrete system at 168%. The result was in tandem with the number of trades for each building system. For example, the conventional building system consisted of four major operations, namely the erection of scaffolding and formwork, installation of reinforcement bars, casting of concrete, and dismantling of scaffolding and formwork; hence, it was highly labour intensive. However, only a few construction operations are needed for an industrialised building system. For instance, the precast concrete system was pre-assembled in the factory, thus reducing on-site labour input. Besides that, no scaffolding is required for the cast in-situ tunnel form system to support the slab.

Cycle Time Comparison between Structural Building Systems
This section examines the cycle time, measured in days, required to complete the structural element of one unit house. Table 7 shows the cycle time for each project, while Table 8 shows the mean cycle time for the four structural building systems. In Table 9, analysis of variance (ANOVA) results indicated that there was a significant difference between the four building systems in terms of cycle time per house. The mean cycle times were 4.9 days for the conventional building system, 3.9 days for cast in-situ table form, 2.9 days for cast in-situ half tunnel form and 2.3 days for the precast concrete system. In percentage terms, the conventional building system required 26% more cycle time than the cast in-situ table form system, 41% more than the cast in-situ half tunnel form system, and 53% more than the precast concrete system.
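As a quick check on these figures, the sketch below re-derives the percentage differences from the mean cycle times quoted above under the two possible baselines (the conventional system as the base, or the faster system as the base). This is only an illustrative computation; the study does not state which convention its percentages use, and the numbers suggest it mixes the two.

```python
# Sketch: re-deriving the cycle-time percentages from the reported mean values.
cycle_days = {
    "conventional": 4.9,
    "cast in-situ table form": 3.9,
    "cast in-situ half tunnel form": 2.9,
    "precast concrete": 2.3,
}

conv = cycle_days["conventional"]
for system, days in cycle_days.items():
    if system == "conventional":
        continue
    saved_vs_conventional = (conv - days) / conv * 100   # conventional as the baseline
    extra_for_conventional = (conv - days) / days * 100  # faster system as the baseline
    print(f"{system:30s} {saved_vs_conventional:5.1f}%   {extra_for_conventional:5.1f}%")

# The quoted 41% and 53% match the first column (time saved relative to the
# conventional system), while the quoted 26% matches the second column (extra
# time the conventional system needs relative to the table form system), so the
# percentages in the text appear to mix the two baselines.
```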
Summary
This study has introduced a standardised data collection methodology for measuring and comparing the building structural element of conventional and industrialised building systems. Researchers can combine data points from various projects to produce a larger database if they adopt this methodology. The rationale for combining the data points is that the majority of residential projects have a simple structural layout plan and do not have any distinctive architectural features. A total of 499 labour productivity data points were obtained from seven ongoing residential projects. The results and discussion revolve around the comparison between structural building systems in terms of labour productivity and cycle time per structural element of one house. In terms of labour productivity, the precast concrete system was the most productive building system, with a labour productivity of 1.33 manhours/m² compared to the conventional building system's 4.20 manhours/m². Besides that, taking the conventional building system as the benchmark of 100%, the precast concrete system achieved a construction speed of 168%. For instance, by the time the first house constructed using the conventional building system is just finished, the first house constructed using the precast concrete system is already done and the second house is about 68% completed. The comparison of cycle time per house indicated that the two building systems were significantly different: the mean cycle times were 4.9 days for the conventional building system and 2.3 days for the precast concrete system. In percentage terms, the conventional building system required 53% more cycle time than the precast concrete system. Finally, the precast concrete system is preferred over the conventional building system because the building works can be done faster, since it requires less time to construct completely.

3.3.6 Wastage
In the field of structural concrete construction, two basic concepts are generally applied in practice: precast concrete construction and conventional cast in-situ construction. Wastage can be defined as goods that are damaged, out of date, reduced, or generally unsaleable, which are destined to be thrown away and which are written off as a loss. Construction waste can be divided into three main categories, namely material, labour, and machinery waste. However, material wastage is given more concern because most of the raw materials used in the construction industry come from non-renewable resources. The construction industry is a major generator of waste material. Traditionally, construction waste is defined as any material, apart from earth materials, which needs to be transported elsewhere from the construction site or used within the construction site itself for the purpose of land filling, incineration, reuse, recycling or composting, other than the intended specific purpose of the project, due to material damage, excess, non-use, or non-compliance with the specifications, or as a by-product of the construction process. Material construction waste can be classified as natural waste (unavoidable waste allowed for in the tender), indirect waste (material used for purposes other than that for which it was ordered), and direct waste (material which is accounted for as a loss). It is generally known that a comparatively large portion of materials is wasted because of poor material control on building sites. The problem of material wastage is not an isolated issue on construction sites; it is also an environmental concern. When construction waste is viewed from an environmental perspective, a different type of classification must be considered. The environmental criteria include the consideration of solid waste, which is divided into four categories: hazardous waste production, non-hazardous waste production, inert waste production, and radioactive waste production. The construction industry consumes a significant amount of building materials and produces large quantities of building waste.
Construction and demolition (C&D) material is a mixture of inert and non-inert materials arising from construction, excavation, renovation, demolition and roadwork. The composition of construction waste is divided into two major categories: inert materials (soft and hard inert materials) and non-inert waste. The soft inert materials (such as soil, earth and slurry) can be reused as fill materials in reclamation and earth-filling works. The hard materials (such as rocks and broken concrete) can be reused in reclamation works and/or recycled for construction work as granular materials, drainage bedding layers and concrete applications. The non-inert waste (such as metal, timber and packaging waste) can be recycled or disposed of in landfills. Further, it was shown that there is a noticeable difference in the generation of material waste between pre-cast and in-situ construction. In general, any reduction in on-site concreting leads to waste reduction, so precasting and prefabrication offer significant opportunities for the reduction of waste. In comparison, the wastage when using the precast concrete construction method is smaller than with the conventional cast in-situ construction method. The main reason behind this may be the negligible waste arising during transportation and installation at the site. The pre-cast concrete elements transported to the site were stored unit-wise by the manufacturers themselves to avoid damage to the elements; hence the waste arising during transportation was minimised and identified as zero. Since pre-cast elements were supplied according to the required length, waste arising during installation of the elements was at a minimal level, and waste occurring due to over-ordering of materials was also eliminated. Further, the pre-cast elements were produced in factories under proper supervision using steel moulds which can be formed in different sizes; therefore the wastage of materials during manufacturing was also reduced by a considerable amount. On the other hand, there is a larger amount of wastage with conventional in-situ construction than with precast concrete construction. This large quantity of wastage for conventional in-situ construction was identified as being due to lack of supervision, inaccurate mixing methods, inappropriate types of equipment, poor storage of materials and poor-quality workmanship, and this led to higher waste of materials in the following ways: extra cement being used to speed up the hardening process; extra concrete being used due to the breakage of formwork; higher waste in transit and handling of metal and sand; and extra concrete being used on uneven surfaces. The aim of an in-situ method is to eliminate or reduce the traditional site-based trades, like traditional timber formwork, brickwork and plastering, and to reduce labour content. The conventional cast in-situ construction method utilises lightweight prefabricated formwork made of steel, fibreglass or aluminium in order to replace the existing conventional timber formwork. The method is suited to large numbers of housing units that require repetitive use of formwork; the formwork can be reused as many times as possible with minimum wastage. There is a noticeable difference between the waste of pre-cast construction and in-situ construction, and because of the significant differences in material wastes, it can be said that there is a significant reduction in material wastage with pre-cast concrete compared to conventional in-situ concrete.
Therefore, it can be concluded that there is a significant waste reduction when pre-cast concrete is used.

3.4 Conclusion
From the studies that have been done, it can be concluded that the precast method is better than the conventional cast in-situ method in terms of cost, speed, labour, quality, wastage and productivity.

Saturday, November 30, 2019

Kubla Khan Essays - British Poetry, Samuel Taylor Coleridge

Kubla Khan

If a man could pass thro' Paradise in a Dream, & have a flower presented to him as a pledge that his Soul had really been there, & found that flower in his hand when he awoke -- Aye! and what then? (CN, iii 4287)

Kubla Khan is a fascinating and exasperating poem written by Samuel Taylor Coleridge. Almost everyone who has read it has been charmed by its magic. It must surely be true that no poem of comparable length in English or any other language has been the subject of so much critical commentary. Its fifty-four lines have spawned thousands of pages of discussion and analysis. Kubla Khan is the sole or a major subject in five book-length studies; close to 150 articles and book-chapters (doubtless I have missed some others) have been devoted exclusively to it; and brief notes and incidental comments on it are without number. Despite this deluge, however, there is no critical unanimity and very little agreement on a number of important issues connected with the poem: its date of composition, its meaning, its sources in Coleridge's reading and observation of nature, its structural integrity (i.e. fragment versus complete poem), and its relationship to the Preface by which Coleridge introduced it on its first publication in 1816. Coleridge's philosophical explorations appear in his greatest poems. 'Kubla Khan', with its exotic imagery and symbols, rich vocabulary and rhythms, written, by Coleridge's account, under the influence of laudanum, was often considered a brilliant work, but without any defined theme. However, despite its complexity the poem can be read as a well-constructed exposition on human genius and art. The theme of life and nature again appears in 'The Rime of the Ancient Mariner', where the effect on nature of a crime against the power of life is presented in the form of a ballad. 'Christabel', an unfinished 'gothic' ballad, evokes a sinister atmosphere, hinting at evil and the grotesque. In his poems Coleridge's detailed perception of nature links scene and mood, and leads to a contemplation of moral and universal concerns. In his theory of poetry Coleridge stressed the aesthetic quality as the primary consideration. The metrical theory on which 'Christabel' is constructed helped to break the fetters of 18th-century correctness and monotony and soon found disciples, among others Walter Scott and Lord Byron.

Opium and the Dream of Kubla Khan
Coleridge's use of opium has long been a topic of fascination, and the grouping of Coleridge, opium and Kubla Khan formed an inevitable triad long before Elisabeth Schneider combined them in the title of her book. It is tempting on a subject of such intrinsic interest to say more than is necessary for the purpose in hand. Since the medicinal use of opium was so common and widespread, it is not surprising to learn that its use involved neither legal penalties nor public stigma. All of the Romantic poets (except Wordsworth) are known to have used it, as did many other prominent contemporaries. Supplies were readily available: in 1830, for instance, Britain imported 22,000 pounds of raw opium. Many Englishmen, like the eminently respectable poet-parson George Crabbe, who took opium in regular but moderate quantity for nearly forty years, were addicts in ignorance, and led stable and productive lives despite their habit.
By and large, opium was taken for granted; and it was only the terrible experiences of such articulate addicts as Coleridge and De Quincey that eventually began to bring the horrors of the drug to public attention. Coleridge's case is a particularly sad and instructive one. He had used opium as early as 1791 (see CL, i 18) and continued to use it occasionally, on medical advice, to alleviate pain from a series of physical and nervous ailments. But the opium cure proved ultimately to be more devastating in its effects than the troubles it was intended to treat, for such large quantities taken over so many months seduced him unwittingly into slavery to the drug. And his life between 1801 and 1806 (when he returned from Malta) is a somber illustration of a growing and, finally, a hopeless bondage to opium. By the time he realized he was addicted, however, it was too late. He consulted a variety

Tuesday, November 26, 2019

The Death Penalty Debate

The Death Penalty Debate

Introduction: Capital punishment, or the execution of a person by the state as punishment for a crime, has traditionally played a major role in society's criminal justice system. However, since the latter half of the 20th century, most countries in the world have abolished the death penalty completely or in practice. Capital punishment is mainly a communal ethical issue, as there are at least two viewpoints and it is the concern of the wider community and of organizations. Aspects of social justice, equality, individual rights and freedom, and the general welfare of various stakeholders, mainly the criminal, are also embedded in the debate, confirming the communal principle it entails. This issue of moral concern also contains aspects of personal ethics, because of the cultural attitudes, values and beliefs that influence our viewpoint on it.

History: Although societies have held different beliefs about punishment, in the Middle Ages a life could depend upon the whim of a king. [Image: an ad from the Ecologist Green Party in Mexico.] In Australia, during the initial years of colonisation, our criminal justice system replicated that of the British. This meant that the death penalty was available for trivial crimes such as burglary, sheep stealing, forgery and sexual assaults. The death penalty was legal until 1985, with Queensland as the first state to abolish it; our last hanging was in September 1913. According to the Death Penalty Abolition Act of 1973, no states or territories in Australia can prescribe the death penalty, which makes life imprisonment the most severe sanction available. As of 2008, 112 countries have abolished the death penalty, with only 83 countries retaining it. There are only four countries which have reintroduced the death penalty: Nepal, the Philippines, Gambia and PNG. However, since those who commit violent crimes are seen as unfit to live in a civilized society, the debate over capital punishment is controversial and ongoing...

Friday, November 22, 2019

What Will Current Politics Do To Writers

What Will Current Politics Do To Writers I just declined a podcast with a very well-known  writers entity because they wanted me to delve into the dynamics of current politics and how it will affect writers when it comes to grants. I did not want to open that can of wormsthe forecasting and judgment of liberal versus conservative thinking. They suggested that it was just factual, and I said in this environment, where being factual is still considered taking sides, I could not participate. Sad, but tempers are running still too hot right now.   Frankly, if National Endowment funds were completely cut off, Im not sure the average writer would be able to tell, because more of their funds go to dance, art, music, and such. Yes, funds would be cut off from nonprofit retreats, some school creative writing projects, and individuals whove earned fellowships (about 50 per year, half of which are translations of works). But . . . the average writer attempting to earn a living would not feel it. Because  the average writer doesnt live off grants. New writers cant qualify. However, the successful writer, whether new or seasoned, is a scrappy writer. I did not want to get into that conversation about current politics, just like I refuse to discuss writers block. Who needs an excuse to feel like less of a writer? Just like you write through writers block, you pitch and submit to  whatever market is out there, with or without financial support. It doesnt change who you are as a writer. You are still writing. You are still being creative. When you decide the power is yours, not theirs (regardless who they are), you will thrive.

Wednesday, November 20, 2019

Interactive and Digital Marketing Essay Example | Topics and Well Written Essays - 2250 words

Interactive and Digital Marketing - Essay Example It has huge ware houses with lots of products and the inventory keeps getting updated at a quick rate in each week. It targets the market of UK, France, Germany, Spain, Italy, Australia, USA, China and also distribution channel to other countries as well which are all generally been controlled by the two distribution channel of the company in UK. The website used by the company is very accessible and useful for all the customers. The website provides good facility for all its customers to find their product as per their needs and buy the particular product. It has the facility of different section of apparels for men’s and women’s which enables the customers to easily go into the section of their choice. The company website has got the user friendly navigation platform which helps the customers to access and buy products easily and comfortably. The best facility that the company provides is that people can access to the site and buy products from their own social networ king site like facebook without even going out of the social networking site. The company website is made based on the standards of worldwide consortium and also national institute of blind using plain English so that it is easily understandable to the customers. The website is updated on timely basis based on the new browsers and software (ASOS, 2014). Primark was opened in 1969 in Dublin, Ireland and came up with 38 stores. The company has opened 161 stores currently with its first store in Derby in 1973. The company is an Irish apparel retailer which operates in Belgium, France, Spain, Portugal, Netherlands, Austria, Germany, Ireland and UK. The company offers variety of products like men’s and women’s beauty products, confectionery and also kids’ apparel of large variety. The company website provides variety of options to access where every section is been divided like the women, men’s, kids and home, it also provides the facility to set the budget price of the customers so that

Tuesday, November 19, 2019

Serbian and Kyrgyz Essay Example | Topics and Well Written Essays - 1000 words

Serbian and Kyrgyz - Essay Example

Like the European monarchs after 1848, the most influential men of the post-Soviet states (Gordy 78-80; Jennings) have grown deeply concerned about the transnational spread of revolution. Most states have already begun taking countermeasures to head off such a possibility (Woodward 123; Gordy 78-80). The democratic revolutions which have occurred among the post-communist nations (Woodward 123) present a challenge for social science theorizing, because the cross-national motivations that in part drive their spread violate the assumption of the independence of cases that lies at the basis of much social scientific analysis: both analyses based on the Millian method (Woodward 123) and those statistical analyses that rely on the notion that the result of each throw of the political dice (Woodward 123) is independent of the results of prior throws. With each iteration the shape has adjusted somewhat as it faces the reality of local factors (Woodward 123; Kapatadze 186). However, its main features have revolved around six elements: the use of stolen elections as the occasion for massive mobilizations against pseudo-democratic regimes; foreign support for the development of local democratic movements (Woodward 123); the organization of extreme youth movements using unorthodox protest tactics (Kapatadze 186) preceding the polls, in order to undermine the regime's popularity (Kapatadze 186) and will to suppress, and to arrange for a final showdown (Woodward 123; Kapatadze 186); a united opposition established in part through foreign prodding; external diplomatic pressure and unusually large electoral monitoring (Woodward 123); and massive mobilization upon the announcement of fraudulent electoral results, with the use of non-violent resistance tactics taken directly from the work of Gene Sharp, the guru of non-violent

Saturday, November 16, 2019

Investigation and prevention of child abuse Essay Example for Free

Investigation and prevention of child abuse Essay Child abuse is one of the most common human right violations in the human community. According to UNCEF reports, at least one child dies every day as a result of child abuse. Nevertheless, the exact number of child deaths as a result of chilled abuse is hindered by the fact that most cases of child deaths in the communities goes without investigation. Still, an estimated over 30, 000 child in America are put under protection registers annually (American Prosecution Research Institute 12). The world is approximated to have over 100 million abused children most of whom are from the developing nations. However, these are just but a few of child abuse cases which are reported. There are numerous types of child abuse practices in the society. They range from infant shaking, to child battering, to child negligence, to child sexual abuse (Perona, Bottoms, and Vieth 42). Such practices are blamed for having a negative impact on the psychological, physical, and social development of the child. The process of preventing child abuse effectively dictates for the participation of all members of the community in ensuring early identification of such problems (Perona, Bottoms, and Vieth 51). Creating awareness among children as well as other members of the community on the rights of children plays an instrumental role in mitigating child abuse in the community. Investigation of child abuse for legal charges involves numerous methods depending on the type and magnitude of the act. For instance, Physical sexual abuse on kids can either be self evident or call for medical procedures to qualify (Myers 46). The history of family relations plays a crucial role in child abuse investigation practices by law enforcement. This paper seeks to give a detailed discussion on the prevention and investigation of child abuse. Prevention of child abuse The realization of effective child abuse prevention entails the creation of awareness on the forms and how to identify child abuse as well as engaging efforts in mitigating risk factors to child abuse in the community (Perona, Bottoms, and Vieth 51). The problem of child abuse in the community is increased by lack of awareness by the public. According to available statistics for example, child sexual abuse incidences in the American society has its peak between the onset of puberty and adolescence (American Prosecution Research Institute 67). At this age, most of the victims have mental capability to identify and avoid being victims of sexual abuse. Based on this reasoning, creating adequate awareness among the child can greatly aid in mitigating child abuse practice in the community. In this pursuit, since 1983, the American nation has set April as Child Abuse Prevention Month. In addition, the government funds child-abuse prevention initiatives through its Community-Based Grants for the Prevention of Child Abuse and Neglect (CBCAP) (American Prosecution Research Institute 71). Still on awareness, society cohesion factor plays an important role in preventing child (Myers 102). Just as is the case with other forms of crime prevention practices, the local community members find much advantage in the early identification and thus controlling of child abuse in their neighborhoods. This is because; they are the ones living close to the sources of child abuse; the family unit. Therefore, encouraging communal living among members of the community is instrumental in preventing child abuse. 
Another common practice for preventing child abuse is enforcing zero-tolerance policies on domestic violence. Numerous research findings establish that families marked by couple violence have a higher prevalence of child abuse, both physical and emotional (Myers 112). There are numerous risk factors identified as causing child abuse. These include the social and economic position of the family and the local community, and the effectiveness of the law enforcement and criminal justice system in addressing child abuse cases (Myers 108). It has been sufficiently claimed that poverty in the family plays an instrumental role in perpetuating child abuse practices such as neglect. Financial constraints in the family also cause stress and depression among parents, a factor that negatively impacts the parent-child relationship. Still, an idle mind is the devil's workshop. Therefore, the sustainable prevention of child abuse requires resolving the economic problems affecting members of the local community. Also found to influence the level of child abuse are the character and gender of the child. Statistical evidence indicates that mentally or physically disabled children are most at risk of child abuse (Perona, Bottoms, and Vieth 61). The statistics also establish substantial evidence connecting gender classification as a clear cause of child abuse in the community. In particular, the girl child is prone to child abuse and neglect. In some countries, the girl child is rarely provided with an education equal to that of the boy child. This is a denial of the child's right to education. In addition, girls are common victims of forced marriage as well as prostitution. On the other hand, the boy child is commonly subjected to forced labor. Boys are also the main victims of corporal punishment compared to girls. These are the reasons behind the enactment and enforcement of numerous laws mitigating practices such as child pornography and prostitution and child labor (American Prosecution Research Institute 61). Many nations across the globe, including the United States, have banned corporal punishment of children both in schools and in the family. This serves the purpose of limiting inhuman punishment of children. It is worth noting that the United Nations initiative on providing basic education to all children has taken hold in many countries across the globe. This initiative is quite instrumental in preventing practices that deny girls an education, as much of its emphasis is on the girl child. Such an initiative is supported by numerous community-based non-governmental organizations. The war on drug abuse in the community is an important tool in mitigating child abuse. According to numerous research findings, drug abuse, such as alcoholism, is among the leading causes of domestic violence as well as child sexual abuse in the community (Myers 121). This claim is supported by the fact that drugs compromise the reasoning and judgmental ability of an individual. It can also be explained by the fact that drugs heighten the aggression and temper of the abuser. In a move to mitigate drug abuse as a potential cause of child abuse, the American nation spends billions of dollars on the war against drugs in society.
Investigation of child abuse
The process of investigating child abuse is heavily dependent on the availability of claimed evidence of physical, emotional, and/or sexual abuse. According to the available legal provisions, some forms of child abuse, such as child labor, neglect, and failure to provide education, as well as some forms of physical abuse, are self-evident (Myers 132). Therefore, the investigation of such practices is mainly based on reported evidence rather than requiring medical procedures. Also commonly involved in investigation practice is identifying the history of a child abuse problem. According to psychological principles, human problems such as emotional stress and depression are a direct result of long-term abuse in children. Based on this, the period over which such abuse has been inflicted on a particular child should be provided as qualifying evidence against defendants charged with child abuse (Myers 136). As per the requirements of the law, this evidence can be given by the victim themselves, by a member of the local community, or by a doctor. The doctor should conduct an extensive diagnosis of the subject matter to give admissible evidence. Such a diagnosis might cover, but is not limited to, the extent of physical or emotional damage inflicted on the child. However, the laws protecting children against abuse prohibit even the slightest form of action against a child that can compromise their social, emotional, academic and physical development.

Conclusion
Despite the numerous efforts by governments and other community-based non-governmental organizations, child abuse remains a major problem in the global community. There are various causes of child abuse in the community, including domestic violence, cultural beliefs, economic status, and social practices. The problem is further complicated by the failure of victims or members of the community to report such cases to the relevant authorities for legal action against the perpetrators. Therefore, effective child abuse prevention strategies must take into account the need to create public awareness of the problem. They should also place much emphasis on resolving the main social and economic factors which promote child abuse. It is only through concerted cooperation among all stakeholders in the community that the war against child abuse can be successful.

Thursday, November 14, 2019

Classification Essay - PTA Personalities

PTA Personalities Many public institutions rely on the generosity and help of volunteers in order to run smoothly. One of the more important institutions is the school, and one of the most visible volunteers in the school is the PTA volunteer. These volunteers fulfill a necessary role, especially for the elementary schools, by augmenting the work of the principal and teachers with extras that the school ordinarily would not have. The people who do the volunteer work are varied, but the PTA seems to act as a magnet for three types of personalities: the power seeker, the eager beaver, and the dependable worker.

Dominating Dora, the "power seeker," usually starts off as a committee chairman and almost always ends up as the PTA president. She feels she must run the PTA her way because only she knows the best way to do it. She calls board meetings often and is incensed and hurt if someone misses the meeting. All jobs must be done her way, and she frequently organizes half of the job before it is delegated. She then checks up to see if it is being done precisely as she organized it. On the other hand, she may not delegate anything at all, preferring to do most of the work herself. Not delegating the work ensures that it will be done properly, namely her way. Dominating Dora usually follows an unacknowledged personal agenda to gain status, prestige, influence, and authority; she often has no idea that she is following a personal agenda. The school personnel are wary of her since she is very bossy in her dealings with everyone. She even goes so far as to tell the principal and teachers how to go about their own jobs. Dominating Dora also promotes programs within the PTA that the principal often ... ...ant to what she is doing.

The "dependable worker" like Normal Nancy does not burn out because she paces herself, works steadily, and fills in the gaps where needed. Doras and Ritas may come and go, but Nancys "keep going and going and going."

The interesting thing about the "power seeker," the "eager beaver," and the "dependable worker" is that they are all necessary to run the PTA organization. Their quirks are what make them important in getting the activities planned, the prizes made, the playground equipment ordered, and the book fair organized. Another noteworthy fact is that, when necessary, any PTA volunteer can become any one of these three types of people. The fact that a "power seeker," an "eager beaver," and a "dependable worker" can fit together like a puzzle to form a bigger picture is the miracle of the PTA volunteer organization.

Monday, November 11, 2019

Digital Libraries Essay

Ashley Tipton
Digital Libraries

Introduction
A simple definition of a digital library is a library where collections are stored in digital formats instead of physical formats and are accessible via computers. The content can then be accessed locally, as within a library, or remotely, such as from other places on a college campus or from a user's home. Many people believe that digital libraries are the future. There are also those who still hold to the thought that the traditional brick-and-mortar building is the way a library is supposed to be and that moving into the digital realm is not the direction a library should be heading.

History
The information revolution is one of the marvels of the 20th century. We are now living in an information society where almost everyone around us has a computer and a smartphone and is connected to the Internet. As our society has become more and more connected, libraries have also started to become more digitized. Library automation came into popularity in the early 1950s. It started with punched card applications to library technical services operations. In 1965, Licklider coined the phrase "library of the future" to refer to his vision of a fully computer-based library, and then in 1978, F.W. Lancaster wrote of the "paperless library" (Harter, 1996). Other terms later used to describe the library of the future were "electronic library," "virtual library," "library without walls," and "bionic library" (Harter, 1996). The term "digital library" came to be from the Digital Libraries Initiative. In 1994, six universities in the United States were granted 24 million dollars for digital library research. This was brought on by the sudden boom of the Internet. "Digital library" is the name that was most widely adopted by academics, researchers, and librarians and is used to describe the process of digitizing information resources.

Digitization
According to Ram Nath Maurya, there is a stress on three things in the digital world (Maurya, 2011):
• Awareness of information, which gives the breadth of vision.
• Awareness of technology, which gives the power to make the visions manifest.
• Awareness of needs, which provides the insight to use professional skills and talents to greater effect.
The meaning of this is that it is important for the user to have information that is easily locatable and easily accessible. Nowadays, library patrons have become information conscious and no longer want to wait and go to the library to find what they need. They want to access the information electronically. The process of digitizing implies the production of a digital surrogate for a physical object (Unsworth, 2004). There are many different items in libraries that are digitized, ranging from serials that can be found either in print or online to rare books and archives that are now being preserved in a digital format. There are many positive outcomes that come from digitizing parts of a library's collection. There is less cost in reshelving the item. The item is simultaneously available to multiple users. The item does not have to be replaced, since the physical copy is not being used and there is no chance of it being lost. Also, if it is a rare item, it can be preserved more easily as it will be less frequently handled.

Why Go Digital?
The most valuable aspect of the digital library is its reduction in cost. Many expenses are reduced or eliminated, such as staff pay, book maintenance, rent, and additional books.
This makes the cost of maintaining a digital library much lower than that of a traditional library. Also, the reduction in the use of paper has a positive impact on the environment, and environmental scientists all over the world favor digital libraries to help reduce paper usage. Libraries that have a digital presence can be reached all over the world, thus allowing the library to have a far greater audience beyond its local community. Its collection can be shared and accessed from anywhere. As distance education becomes more popular, it is even more important for libraries to have an online presence. Students who might not be able to physically access their school library's collection can instead find what they need on its website. Many libraries are offering a very large number of databases containing full-text journals and also electronic books. The interlibrary loan program allows students to order books from their own school and other libraries without leaving their home. The role of the librarian in a digital library is still vastly important. Librarians are needed to package and repackage information; they set up the proxies and open URLs, do electronic publishing, provide reference instruction, and teach patrons how to use electronic resources. Also, there is always more information that needs to be digitized.

Disadvantages of Digital Libraries
There are many threats to the nature of digital libraries. Computer viruses are a danger if libraries are not careful to protect their hardware and software. The high initial cost of infrastructure, including hardware, software, networking, and IT professionals, can be a downside, as can the ongoing cost to maintain and upgrade it. Standardization is another issue that libraries can face when moving to a digital format: each library is different, and therefore it is extremely likely that each library will have a very different way of digitizing its collection and presenting its information online. Another potential problem is copyright. Digital libraries have to find a way to properly distribute information without violating copyright law; the copyright of the author has to be protected as items are digitized and put into an online collection.

Future of Digital Libraries
The future trend for libraries is to keep moving toward a digital format. Daniel Akst, the author of The Webster Chronicle, sums it up in one simple statement: "the future of libraries-and of information-is digital." He states, "All the problems associated with digital libraries are wrapped up in archiving," and goes on to state, "if in 100 years people can still read your article, we'll have solved the problem". Computer storage continues to grow at an exponential rate and the cost keeps going down. It is believed that eventually, given the current advancement of technology, a person will be able to virtually access all recorded information. There are many large-scale digitization projects underway at places such as Google, the Million Book Project, and the Internet Archive. The technology behind scanners and digitization techniques also keeps improving, with recent advancements in how books are handled and in presentation technologies such as optical character recognition. Libraries will continue to become more equipped to digitize their own collections and share them with the world.

Bibliography
Akst, Daniel. The Webster Chronicle. New York: Bluehen, 2002. Print.
Harter, S. (1996, September). What is a digital library? Definitions, content, and issues. Paper presented at KOLISS DL '96: International Conference on Digital Libraries and Information Services for the 21st Century, Seoul, Korea. Retrieved from http://php.indiana.edu/~harter/koreapaper.htm
Lagoze, C., Krafft, D., Payette, S., & Jesuroga, S. (2005). What is a digital library anymore, anyway? D-Lib Magazine, 11(11). Retrieved from http://www.dlib.org/dlib/november05/lagoze/11lagoze.html
Maurya, R. (2011). Digital library and digitization. International Journal of Information Dissemination & Technology, 1(4), 228-331. Retrieved from http://web.ebscohost.com.proxy.lib.fsu.edu/ehost/pdfviewer/pdfviewer?vid=7&hid=9&sid=4c2e1991-9b6d-48bc-bd10-edec1ba6b69e@sessionmgr11
Unsworth, J. (2004, May 17). The value of digitization for libraries and humanities scholarship. Retrieved from http://people.lis.illinois.edu/~unsworth/newberry.04.html

Saturday, November 9, 2019

World War 1 – Was this an "unnecessary war"?

There are conflicting views on this topic, as the subject is a complex one. It is nevertheless true that this was a war that could have been avoided. It can be argued that WW1 was inevitable in the circumstances, but the root cause of the war was limited and could have been contained; it was a series of events that triggered a massive, global war. The origin of the conflict was the disagreement between Austria-Hungary and Serbia over how to handle the assassination of Archduke Ferdinand. No other countries were involved in the matters connected with the assassination. Russia and Germany got involved simply because of their objectives to safeguard Serbia and Austria-Hungary, respectively. Other countries like Britain, France, and the Ottoman Empire had almost no interest in the matter. Thus, in simple terms, if Russia and Germany had kept out of the matter, the war could have been avoided.

The Series of Incidents
The First World War was definitely an unnecessary conflict. What began as a local conflict over a political assassination soon turned into an unbelievable bloodbath. The minor conflict between Austria-Hungary and Serbia soon had Americans fighting in France, Indian troops fighting in Mesopotamia, and Australians fighting in Gallipoli. For the Allied Powers, the fight was now mainly about Germany, not Austria-Hungary. The prime cause of the original conflict, Archduke Franz Ferdinand's assassination, however, still remained unresolved. Those who were still fighting in 1918 were doing so because halting the war could lead to unacceptable losses. Nationalism, spreading its roots fast through southern and eastern Europe, was the major cause of mutinies in the Austro-Hungarian army and led to its collapse. Soon, Germany was isolated, and this ushered in the war's end (Brussels Journal, 2008).

Opaqueness of Diplomacy
Although diplomacy plays a major role in preventing armed conflicts, in the case of World War I we find it playing the opposite role, whether intentionally or not. Few of the warring nations in World War I were directly interested or even involved in the conflict between Austria-Hungary and Serbia. They became involved only because of treaties obliging them to protect other countries. Many of these treaties were publicly known, while some were secret. Unclear diplomatic objectives were perhaps the main factor that led to Germany making some aggressive moves: the Germans thought that Britain would never go to war against them. Russia had its own secret treaties and agreements to take care of, and Italy is believed to have weighed which side would offer the greater potential benefits. Hence, it was these diplomatic maneuverings that soon accelerated the war, escalating it to catastrophic levels. The potential enemies were unclear about the consequences of their actions. It was the tragedy of the diplomatic crisis that allowed the fighting of August 1914 to swell into a four-year tragedy. Perhaps, if the potential of radio and telephone had been exploited to the maximum to break the obstacles to communication, the destruction of the continent and a dialogue of the deaf could have been avoided (Keegan, 1999).

Conclusion
World War 1 was unnecessary, as the train of events could have been interrupted at any point before the first clash of arms. Millions of lives were lost or shattered.
Some hold the view that World War 1 was inevitable: the nationalists and the militarists had been playing war games for more than four decades, and the time was ripe for a real war. Conflicting ambitions and real tensions played a major role in the war's escalation, with a naval arms race under way between Britain and Germany. With German colonial ambitions intensifying and Europe destabilizing, some observers and strategists felt that war was inevitable. However, WW1 had nothing to do with the root cause of the conflict: the assassination of the Archduke. It seems the tensions were already high in Europe and the rest of the world for a number of different reasons, and the assassination just gave these countries an excuse to settle their own scores.

Thursday, November 7, 2019

Mr. Yogesh Vishwanath Chavan – Zero Energy Theory

Mr. Yogesh Vishwanath Chavan – Zero Energy Theory

Abstract
This zero energy theory has an advantage over other theories in its internal consistency, which allows it to be applied at the microscopic as well as the macroscopic level. Using this theory, an attempt is made to address some of physics' mysterious unsolved problems: dark matter, the definition of space, the expansion of the Universe and gamma-ray bursts, the origin of gravity, the loss of mass and the jets of black holes, the curvature of space-time, the equivalence between inertial and gravitational mass, the difference in mass between the proton and the electron despite their having the same magnitude of charge, the meaning of antimatter, a Universe with no antimatter, and so on. Thus, this theory can be used for the unification of the fundamental forces without the need for any string, i.e. no need for 11 dimensions, parallel universes, or virtual particles like the graviton.

Keywords: Zero Energy, Uncertainty Principle, Gravity, Anti-Neutrino, Antimatter.

Introduction
The energy of a body is its capacity for doing work. Almost all physical laws are based on this term, energy. In fact, the basic law of physics is: "The total amount of energy in the Universe always remains constant; energy can neither be created nor destroyed, but can be converted from one form to another." We have all accepted this law and used it for hundreds of years without considering the following points. If energy can neither be created nor destroyed, then what is the origin of all the galaxies (matter) in the Universe? In other words, the Universe itself violates the above statement. If we assume that some fixed amount of matter was already available in the Universe, which flew apart in the form of galaxies during the big bang explosion, then why was only that fixed amount of matter available to the Universe, who supplied this fixed amount of matter to the Universe (God?), and why is it not being supplied again? Thus, the statement that "the total amount of energy in the Universe always remains constant" produces only unanswered, mysterious questions. The available energy in the Universe is in the form of matter or radiation, which is positive mass. If energy can neither be created nor destroyed, then the total amount of energy in the Universe must be zero. Then, where is the negative energy? The only way to escape such mysterious questions is to modify the above basic law as follows: "The Universe must be continuously creating energy at every point such that the total amount of energy in the Universe always remains equal to zero." From this statement, energy can be created or destroyed in pairs of equal amounts of positive and negative energy, and for that there is no need for any supplier like God. The Universe was created from zero energy and is still in existence with a total energy equal to zero. Or, the Universe is a Big Zero. But if it was created from zero energy, then what produces pairs of positive and negative energy continuously at every point? The answer is the uncertainty principle, which implies that empty space cannot be exactly zero; hence there should be formation of pairs of energy of opposite sign without violating the energy balance or the uncertainty principle. In other words, the uncertainty principle itself is the God or Creator of the Universe. But, if all galaxies are made from positive energy, where is the negative energy?
Macroscopic Level (Gravity)

Gravitational Constant (G) and Space as Negative Energy
The gravitational force between two objects is written as:

F = (G*M1*M2)/R^2 (1)

where M1 and M2 are the masses of the two objects, R is the distance between their centers, and G is the gravitational constant = 6.67*10^-11 m^3/(kg*s^2). This equation is known as the law of universal gravitation. In the above equation, M1, M2, and R^2 are positive values. But the gravitational force is always attractive; hence the right-hand side of the equation must be negative to represent an attractive force. Therefore, the gravitational constant (G) must be negative in sign. Compare Coulomb's electrostatic force law: the force between two charged particles is attractive when the product of opposite signs of charge gives a negative value, while for a repulsive force the product of like signs of charge gives a positive value. As per the law of gravitation, the sign of G must be negative, and dimensional analysis of the constant G shows that either (kg/m^3) or (s^2) must be negative. But s^2 is a positive value, hence the sign of (kg/m^3) must be negative. The gravitational force is an action-at-a-distance force similar to the electromagnetic force. In the electromagnetic force (an action-at-a-distance force), ε0 (the permittivity of free space) and μ0 (the permeability of free space) are constants representing properties of space. Hence G, as a constant, must also represent a property of space. In other words, the above dimensional analysis of G shows that the property of space carrying the dimensions (kg/m^3) must be negative, and that property of space is nothing but the critical density of the vacuum (space). Or, the critical density of the vacuum (space) must be negative, i.e. "empty space must be filled with negative energy particles". Or, what we call "dark matter / dark energy" is actually negative energy particles.

Cosmic Microwave Background (CMB) and Mass of Neutrino
In 1965, Penzias and Wilson announced the discovery of the Cosmic Microwave Background. They observed an excess flux at 7.35 cm wavelength equivalent to the radiation from a blackbody with a temperature of 2.725 degrees Kelvin. The temperature of the CMB is almost the same all over the sky. The effect of the CMB is not due to adiabatic cooling of photons from the expansion of the Universe after the big bang, as per big bang theory, but is a continuous, ongoing effect due to the uncertainty principle. These CMB radiations, or positive energy particles, are the opposite pair of the dark energy in the Universe. The following simple calculation shows that the mass of these CMB radiations is close to the mass of the neutrino, M (approx. 10^-39 kg). From Wien's law, a blackbody (the Universe) at temperature T has a maximum wavelength of λmax (m) = 0.0029/T. Using this value in Planck's relation for the CMB radiations, we find

E = M*C^2 = h*f = (h*C)/λmax (2)
M*C^2 = (h*C*T)/0.0029

Solving for the mass (M), we have

M = (h*T)/(C*0.0029) = (6.62*10^-34*2.725)/(3*10^8*0.0029) = 2.0754*10^-39 kg

The continuous creation of pairs of energy of opposite sign due to the uncertainty principle solves the horizon problem (the extreme uniformity of the CMB radiation in different regions of space). It also solves the flatness problem, i.e. that the density of the Universe is very close to the critical density.
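The arithmetic in equation (2) can be cross-checked with a short Python sketch, assuming the same rounded constant values quoted above:

# A sketch only: reproduce the Wien's-law estimate in equation (2)
# with the rounded constant values quoted in the text above.
h = 6.62e-34      # Planck constant, J*s
c = 3.0e8         # speed of light, m/s
T = 2.725         # CMB blackbody temperature, K
b = 0.0029        # Wien displacement constant, m*K

lam_max = b / T           # peak wavelength of a 2.725 K blackbody, ~1.06e-3 m
E = h * c / lam_max       # photon energy at that wavelength, J
M = E / c**2              # equivalent mass from E = M*c^2
print(lam_max, M)         # M comes out near 2.08e-39 kg, as quoted above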
Zero Energy and No Boundary Condition of the Universe
As per thermodynamics, zero energy means a no-volume condition, i.e. a no-boundary condition. From the above theory, the Universe must have started from zero energy, or a no-boundary condition, and is still expanding in a no-boundary condition by converting zero energy into pairs of positive and negative energy particles due to the uncertainty principle. Or, the Universe is expanding in a no-boundary condition. In other words, the Universe did not start with a big bang explosion; rather, the matter in the Universe was created along with the continuous increase in the size of the Universe. From Hubble's law, the Universe is expanding at a speed equal to the speed of light. After every second, the increase in the radius of the Universe must be equal to 3*10^8 m, or dR/dt = C = 3*10^8 m/s. The density of the Universe always remains constant, equal to the critical density of the vacuum. Let us calculate the matter in the Universe one second after the birth of the Universe due to the uncertainty principle.

R = radius of the Universe after 1 second = 3*10^8 m. Therefore,
M = matter in the Universe after 1 second = Rho(critical)*{4*Pi*R^3/3}
(where Rho(critical) = critical density of the vacuum = 7.94*10^-27 kg/m^3 for H = Hubble's constant = 65 km/s/Mpc)
= 7.94*10^-27*4*(22/7)*(3*10^8)^3/3
= 0.9 kg

From the above, during its early stage the Universe was not hot, as per big bang theory, but very cold. The gravity effect was far too negligible to halt the expansion of the Universe; hence the Universe expanded with no barrier to its expansion. With further expansion of the Universe, the gravity effect became considerable due to the tremendous increase in matter. From the above formula, it can be shown that after just 1000 years the matter in the Universe was around 2.8*10^31 kg, or 14 times the mass of the sun. It was spread throughout the Universe, everywhere within a radius of approx. 9.48*10^18 m (1000*3.16*10^7*3*10^8 m), instead of being collected at a single point; it segregated into clusters of matter which finally converted into galaxies. Thus, this theory also solves the structure problem, i.e. the existence of large-scale structure, such as walls of clusters of galaxies and voids. According to Hubble's law, the age of the Universe is equal to 1/H, which is 15 billion years. Hence the size of the Universe (R) is

R = C*t = 3*10^8*15*10^9*3.16*10^7 = 0.142*10^27 m, where 1 year = 3.16*10^7 s.

The total matter (M) in the Universe after 15 billion years is

M = 7.94*10^-27*4*(22/7)*(0.142*10^27)^3/3
M = 9.52*10^52 kg (4)

The same amount of dark matter exists in the form of space, so that the total energy in the Universe equals zero. The above calculated value of the total mass in the Universe is plausible, given the existence of more than a billion galaxies in the Universe and more than a billion stars in a galaxy. Let us calculate how much matter the Universe must be creating per second at this moment.

dV = increase in volume of the Universe per second = 4*(22/7)*{(R2)^3 - (R1)^3}/3, where R1 = 0.142*10^27 m and R2 - R1 = 3*10^8 m.
∴ dV = 4*(22/7)*(R2 - R1)*{(R2)^2 + R2*R1 + (R1)^2}/3 ≈ 4*(22/7)*3*10^8*3*(0.142*10^27)^2/3 (since R2 ≈ R1)
= 76*10^60 m^3
∴ increase in matter of the Universe per second = 7.94*10^-27*76*10^60 = 0.60*10^36 kg (5)

From the above it is seen that, really, "size does matter"; i.e., the larger the size of the Universe, the greater the creation of matter per second.
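The growth figures above (0.9 kg after one second, about 2.8*10^31 kg after 1000 years, 9.52*10^52 kg after 15 billion years, and roughly 0.60*10^36 kg of new matter per second today) all follow from the constant-critical-density assumption; a minimal Python sketch of that arithmetic, using the paper's rounded constants, is:

import math

# A sketch only: the constant-critical-density growth figures quoted above.
rho_c = 7.94e-27          # critical density used in the text, kg/m^3
c = 3.0e8                 # speed of light, m/s
year = 3.16e7             # seconds per year, as used in the text

def mass_at(t_seconds):
    """Mass enclosed in a sphere of radius c*t at constant critical density."""
    R = c * t_seconds
    return rho_c * (4.0 / 3.0) * math.pi * R ** 3

print(mass_at(1.0))              # ~0.9 kg after one second
print(mass_at(1000 * year))      # ~2.8e31 kg after 1000 years
print(mass_at(15e9 * year))      # ~9.5e52 kg after 15 billion years

# Present-day creation rate: rho_c * dV/dt, with dV/dt ~ 4*pi*R^2*c since R2 ~ R1.
R_now = c * 15e9 * year
print(rho_c * 4.0 * math.pi * R_now ** 2 * c)   # ~0.6e36 kg per second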
This large amount of matter (in the form of radiation) at the edge of the Universe may be the source of the gamma-ray bursts which occur about sixteen times a month and are roughly uniformly distributed in all directions across the sky.

Disappearance of the Sun, Zero Energy and Gravity
What would happen if the sun suddenly disappeared? According to Newton's classical theory, the planets would start to move away from the sun instantaneously due to the absence of gravitational force. According to Einstein's general theory of relativity, there is a universal limit, equal to the speed of light, on the transfer of information from one point to another; hence the planets would not start to move away from the sun at the same time, but the disappearance of the sun would produce gravitational waves (space-time waves) travelling at the speed of light, and when these waves reached the planets, they would start to move away from the sun. But both these theories are on the wrong track. The disappearance of the sun means that all the energy of the sun vanishes suddenly, which creates a zero-energy-density region with a volume equal to the volume of the sun. Now, as argued earlier, space is nothing but negative energy; hence the energy density around the sun is negative. Therefore, this zero energy density exerts a force on the negative energy particles, directed away from the center of the sun. We know that force = mass*acceleration. But here the mass is negative; hence these negative energy particles must accelerate towards the center of the sun (negative acceleration). When this effect reaches the planets at the speed of light, as per the general theory of relativity, the planets experience an attractive force proportional to their mass because of the acceleration of space, which is nothing but gravity. The overall effect is the movement of the planets towards the center of the sun instead of away from it. This acceleration must be greater than the gravitational acceleration produced by the presence of the sun, because the gravitational acceleration produced by the sun is only due to the annihilation of the positive mass of the sun with the negative mass of space. From the above, gravity can be defined as follows: "Gravitational force is an action-at-a-distance force which is due to the creation of a zero-energy-density region around a positive mass because of the annihilation of this positive mass with the negative mass of space." From the above analysis, two points emerge. First, the source object, in order to produce a gravitational effect on other objects, must continuously lose its mass by annihilation with the negative mass of space. Second, the object which experiences the gravitational force does not receive any energy from the source object; hence, to move against the gravitational pull, it must convert its available energy into kinetic energy equal to the gravitational force multiplied by the displacement made by the object. For example, a ball released from a certain height on the earth has to convert its available rest energy into kinetic energy, because the ball does not receive any energy from the earth. Hence the rest energy of the ball does not always remain constant but decreases by conversion into kinetic energy. Let us derive the new energy balance equation for both cases, i.e.
free fall of an object under gravity, and an object thrown upward against the gravity of the earth, between points A and B (RA > RB).

Case 1: For a body falling freely under gravity, there is a continuous conversion of its rest energy into kinetic energy such that at any point in its path the total energy of the body remains constant, equal to the rest energy of the body. Thus, for a freely falling body,

(Rest Energy)A = (Reduced Rest Energy)B + (K.E.)B (6)

From the above formula, let us calculate the velocity V at which all the rest energy of the ball has been converted into kinetic energy. Then (Reduced Rest Energy)B equals zero, i.e. (Rest Energy)A = (K.E.)B. From the special theory of relativity,

(Rest Energy)A = (M0*C^2) = (Kinetic Energy)B = {M0*C^2*[(1/sqrt(1 - (V^2/C^2))) - 1]}
∴ V = {√3/2}*C = 0.866*C = 2.598*10^8 m/s (7)

As per the special theory of relativity, a particle like the photon has only kinetic energy and no rest energy because it travels at the speed of light. Hence, at the above derived velocity, the entire kinetic energy of the ball should convert into photons, where the energy of each photon depends upon the temperature of the ball at this velocity V. Suppose a charged particle (say an electron, with negative charge) falls freely under the influence of a very strong gravitational force; then the entire rest energy of the electron must convert into kinetic energy at the above derived velocity. But the electron never converts into photons, because of the law of conservation of charge. The charge on the particle prevents conversion of its available energy into photons, or radiation. Charge helps to conserve the available energy. If the rest energy of a particle is not converted into kinetic energy, then the charge on the particle must conserve this rest energy, i.e. prevent its conversion into photons. In other words, charge is responsible for rest energy, or for the conservation of energy in charged particles. Hence the existence of matter (galaxies, stars, planets, and finally living cells such as human beings) is due to charge, without which the entire Universe would be filled only with radiation. If a neutral particle carries some amount of energy, all its stored energy has to convert into kinetic energy, as there is no charge to prevent the conversion of this stored energy into kinetic energy and then into photons. A neutral particle can carry only kinetic energy. Therefore, free neutrons are not stable: because neutrons carry no charge, all their stored energy must convert into kinetic energy, whereas in the nucleus of an atom the charge on the proton prevents the neutron from converting its available rest energy into kinetic energy.

Case 2: Consider a ball thrown upward against the gravitational field of the earth from point B to point A. As per classical theory, to throw a ball upward we have to supply kinetic energy to the ball, which then converts into potential energy when the ball, after reaching a certain height, comes to rest. First of all, the concept of supplying kinetic energy to the ball seems a strange idea, because there is only one way of supplying kinetic energy to any system, namely photons. Here one is supplying only kinetic energy and no rest energy to the ball, and to achieve this one would have to transfer photons from one's body through the hand to the ball without any loss of photons to the surroundings. Is this possible? No.
In the case of inertia also, when we apply a force on an object (inertial mass) in order to move it with a certain velocity, we are not transferring any kinetic energy to the object by the application of force. In all these cases (inertial mass as well as gravitational mass) the object converts some amount of its available rest energy into kinetic energy, which depends upon the value of the force. Thus, this theory also achieves "equivalence between gravitational mass and inertial mass" from the point of view that "due to the application of force in both cases, the object converts some amount of its rest energy into kinetic energy, which depends upon the value of the force". Therefore,

(Rest Energy)A = (Reduced Rest Energy)B + (K.E.)B (8)

Equations (6) and (8) are the same, which shows the consistency of this theory. The theory also eliminates the need for potential energy, which is required in classical theory, where determining the potential energy at any point requires some datum reference point, such as the surface of the earth. Why have experiments not been conducted to measure the difference in rest mass (or rest energy) between point A and point B when an object is thrown upward? If such an experiment is carried out, one should get the same rest mass at both points as per this theory, while as per classical theory the object at point A should have its rest mass plus the mass equivalent to the kinetic energy supplied at point B.

Casimir Effect and Zero Energy
The Casimir effect is a small attractive force that acts between two close parallel uncharged conducting plates. Its existence was first predicted by the Dutch physicist Hendrik Casimir in 1948 and confirmed experimentally by Steven Lamoreaux in 1996. According to modern physics, one can interpret the so-called vacuum fluctuations as pairs of particles and antiparticles that suddenly appear together, move apart, and then come back together again and annihilate each other. But this is not true. In fact, this effect can be considered a gravitational force between the plates. The negative energy particles (space) between the plates annihilate with the positive mass of the plates, creating zero energy density between them, while the surrounding energy density remains the same due to the abundance of negative energy particles around the plates. This zero energy density is nothing but a gravitational pull, as argued earlier; hence the plates attract each other. Therefore, this effect cannot be linked with the possibility of faster-than-light (FTL) travel.

Loss of Mass of Black Holes and Jets of Black Holes
From the new definition of gravity, the higher the mass of the object, the more space (negative energy) is destroyed, giving rise to a stronger gravitational field; and an equal amount of the object's own mass is destroyed as well. In other words, for massive objects like black holes, this gravity effect implies a continuous loss in the mass of the black hole. Also, for any object falling freely in the strong gravitational field of a black hole, all its rest energy must be converted into kinetic energy by the time it reaches a velocity V equal to {(√3/2)*C}. The temperature of the object at this velocity must be very high; hence its kinetic energy should convert into high-energy photons, such as gamma rays or X-rays, making the jets around black holes.
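The velocity (√3/2)*C used in equation (7) and again above is simply the speed at which the relativistic kinetic energy equals the rest energy, i.e. the Lorentz factor reaches 2; a minimal Python check of that arithmetic, under the same assumption, is:

import math

# A sketch only: the speed at which relativistic kinetic energy equals rest energy,
# i.e. m0*c^2*(gamma - 1) = m0*c^2, so gamma = 2.
c = 3.0e8
gamma = 2.0
v = c * math.sqrt(1.0 - 1.0 / gamma ** 2)   # = c * sqrt(3) / 2
print(v / c, v)                             # 0.866..., about 2.598e8 m/s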
Gravitational Red Shift of the Spectral Lines and Loss of Energy
It is observed that the wavelength of sodium light coming from the sun (a strong gravitational field) is greater than that from a sodium lamp on the earth, which gives a slight displacement of the spectrum towards the red end, called the gravitational red shift of spectral lines. It is usually considered an effect of the time dilation (a clock lying in a gravitational field appears to run slow) given in the general theory of relativity. But the time dilation should occur only in the strong gravitational field of the sun and is very small in the case of the earth's gravity. Therefore, even if one considers the red shift to be due to time dilation on the sun, once the light reaches the earth from the sun the red shift in the spectral lines should disappear, given the very small time dilation on the earth. Thus, the gravitational red shift is not due to the effect of time dilation. The photon, due to its mass (in terms of kinetic energy), experiences a force due to the gravity effect produced by the sun. In order to move away from the surface of the sun, the photon has to do work against the gravity of the sun. Therefore, the energy balance equation can be written as:

K.E. of photon at surface of the sun = K.E. of photon at surface of the earth + work done by photon.

This work done must be converted into lower-energy photons (of different wavelength than the original photons), which produces a loss in energy of the main photon. Hence the photon received at the surface of the earth has energy less than that produced on the sun, by an amount equal to the work done by the photon. Therefore, we observe the gravitational red shift of the spectral lines.

Curvature of Space-Time and Bending of Light
According to Einstein's general theory of relativity, a body of high mass bends the space-time continuum around itself; the continuum becomes curved. In May 1919, during a total solar eclipse, deviations in the positions of stars appearing to lie in the neighborhood of the sun's disc were observed in the gravitational field of the sun, due to the deflection or bending of the rays of light coming from the stars, thus confirming this prediction of the general theory of relativity. In fact, nothing like a curvature of the space-time continuum happens in a strong gravitational field. There is no such thing as a singularity in the case of black holes, nor concepts like wormholes, terms which appear because of the curvature of the space-time continuum. The correct reason for the bending of light in a strong gravitational field can be explained as follows. The photon contains energy in the form of kinetic energy (K.E. = m*C^2); hence the energy, or mass, of the photon experiences a strong gravitational force. We know from Newton's first law of motion that force is the external agent which changes the state of rest or of uniform motion of a body along a straight line. By applying a force we can change (1) the magnitude, (2) the direction, or (3) both the magnitude and the direction of the velocity of the body. The photon is travelling at the speed of light; hence it is not possible to reduce its velocity, as per the special theory of relativity. Therefore, the effect of the gravitational force is to change the direction of the photon from rectilinear motion. The following simple calculation shows that the bending of light does not depend upon the energy of the light, but entirely upon its distance from the sun and the mass of the sun.
Thus, for all types of photons, from higher-energy to lower-energy photons, the bending effect remains the same, which Einstein considered as curvature of space-time. Consider a ray of light which is travelling in a rectilinear direction at a distance of 10^9 m from the center of the sun and parallel to the surface of the sun. Then, as per the second kinematical equation and Newton's law of gravitation,

S = displacement of photon towards the sun = (u*t + 0.5*g*t^2),

where u = initial velocity of the photon in the direction of displacement = 0; g = gravitational strength of the sun, or acceleration of the photon, = -(G*M)/R^2; G = gravitational constant = 6.67*10^-11 m^3/(kg*s^2); M = mass of the sun = 2*10^30 kg; and R = distance of the photon from the sun = 10^9 m. Let us calculate this displacement of the photon for a time of travel equal to 1 s. Note that the above equation is independent of the energy or matter of the photon.

S = {0*1 - [(0.5*6.67*10^-11*2*10^30*1^2) / (10^9)^2]} = -66.7 m

Figure 1: Bending of light – the projectile motion of light under strong gravity.

From figure 1, for one second of travel of the photon, i.e. for travel of the photon over a distance equal to 3*10^8 m, the photon is displaced towards the sun by about 66.7 m. Thus, the bending of light in the strong gravitational field of the sun is not due to any curvature of space-time; it is a case of projectile motion under the influence of gravity, similar to the parabolic path made by a ball on the earth which is thrown upward at an angle to the surface of the earth. Therefore, there is no need for the concept of curvature of space-time in defining gravity.

Microscopic Level (Matter and Anti-Matter)

The Difference Between the Mass of the Electron and the Proton
In the case of the electrostatic force, we can use the same equation (6) or (8) from the point of view of the application of force (i.e. no supply of kinetic energy from an external medium). Here, the kinetic energy of the electron will be the integral of the electrostatic force between the radius at the origin of the electron (i.e. from the neutron) and the orbital radius of the electron around the nucleus of the atom (for the hydrogen atom, the orbital radius of the electron in the ground state is 5.29*10^-11 m). The equation can be written as:

(Reduced Rest Energy) = (Rest Energy) - (Kinetic Energy) (9)

But, Kinetic Energy = ∫(Fe*dR) = ∫{q^2/(4*π*ε0*R^2)}*dR = ∫{e/R^2}*dR, where e = {q^2/(4*π*ε0)}
∴ Kinetic Energy = {(e/R1) - (e/R2)} = {e*(R2 - R1)/(R2*R1)},

where R1 = radius at the origin of the electron and R2 = orbital radius of the electron in the hydrogen atom. If the right-hand side of the above equation (the reduced rest energy) becomes equal to zero, then we can calculate the radius at the origin, or birth, of the electron. Rearranging terms, we get

R1 = {(e*R2)/((R2*M0*C^2) + e)} = {(2.567*9*10^-29*5.29*10^-11)/((5.29*10^-11*9.1095*10^-31*9*10^16) + (2.567*9*10^-29))}
∴ R1 = 2.8178*10^-15 m (10)

where M0 = rest mass of the electron = 9.1095*10^-31 kg. This is close to the radius of the neutron, which shows that the birth of the electron occurs at the periphery of the neutron. The above calculation shows that "the rest mass of the electron, i.e. 9.1095*10^-31 kg, is due to the birth of the electron taking place at the periphery of the neutron, i.e. at approx. 2.82*10^-15 m". Note that the value of R1
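The two numerical results above, the 66.7 m photon displacement and the value of R1 in equation (10), can be checked with a short Python sketch; the values of q and ε0 used below are standard tabulated constants assumed for the check rather than values given in the text:

import math

# A sketch only: check the two numbers quoted above.
# (a) Newtonian displacement of a photon passing 1e9 m from the sun's centre,
#     over one second of travel, using the text's rounded constants.
G = 6.67e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 2e30        # mass of the sun, kg
R = 1e9             # distance of the ray from the sun's centre, m
g = G * M_sun / R ** 2
S = 0.5 * g * 1.0 ** 2
print(S)            # ~66.7 m towards the sun

# (b) Equation (10): R1 = e*R2 / (R2*m0*c^2 + e), with e = q^2/(4*pi*eps0).
#     q and eps0 are assumed standard values, not taken from the text.
q = 1.602e-19       # elementary charge, C
eps0 = 8.854e-12    # permittivity of free space, F/m
m0 = 9.1095e-31     # electron rest mass, kg
c = 3.0e8
R2 = 5.29e-11       # ground-state orbital radius in hydrogen, m
e = q ** 2 / (4.0 * math.pi * eps0)
R1 = e * R2 / (R2 * m0 * c ** 2 + e)
print(R1)           # ~2.82e-15 m, the classical electron radius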