
Theory and Practice of Translation: A Textbook

This work is a course of lectures and practical assignments on the theory and practice of translation from English into Russian. It describes the main translation techniques and examines concrete examples of translating texts from various fields of science and technology. The textbook is intended for students enrolled in higher professional education programmes in the specialty "Informatics" with the additional qualification "Translator in the Sphere of Professional Communication". The exercises are aimed at developing the skills needed to translate popular science literature. The textbook may also be recommended to students of other specialties who study translation.

               In the introduction to this book I described hacking as a sport; and like most
sports, it is both relatively pointless and filled with rules, written or otherwise, which
have to be obeyed if there is to be any meaningfulness to it. Just as rugby football is
not only about forcing a ball down one end of a field, so hacking is not just about
using any means to secure access to a computer.
           On this basis, opening private correspondence to secure a password on a
public access service like Prestel and then running around the system building up
someone's bill is not what hackers call hacking. The critical element must be the use
of skill in some shape or form.
           Hacking is not a new pursuit. It started in the early 1960s when the first
"serious" time-share computers began to appear at university sites. Very early on,
'unofficial' areas of the memory started to appear, first as mere notice boards and
scratch pads for private programming experiments, then, as locations for
games. (Where, and how do you think the early Space Invaders, Lunar Landers and
Adventure Games were created?) Perhaps tech-hacking—the mischievous
manipulation of technology--goes back even further. One of the old favorites of US
campus life was to rewire the control panels of elevators (lifts) in high-rise buildings,
so that a request for the third floor resulted in the occupants being whizzed to the
twenty-third.
           Towards the end of the 60s, when the first experimental networks arrived on
the scene (particularly when the legendary ARPAnet--Advanced Research Projects
Agency network-- opened up), the computer hackers skipped out of their own local
computers, along the packet-switched high grade communications lines, and into the
other machines on the net. But all these hackers were privileged individuals. They
were at a university or research resource, and they were able to borrow terminals to
work with.
           What has changed now, of course, is the wide availability of home
computers and the modems to go with them, the growth of public-access networking
of computers, and the enormous quantity and variety of computers that can be
accessed.
         Hackers vary considerably in their native computer skills; a basic knowledge
of how data is held on computers and can be transferred from one to another is
essential. Determination, alertness, opportunism, the ability to analyze and synthesize
the collection of relevant helpful data and luck--the pre-requisites of any intelligence
officer--are all equally important. If you can write quick effective programs in either
a high level language or machine code, well, it helps. Knowledge of on-line query
procedures is helpful, and the ability to work in one or more popular mainframe and
mini operating systems could put you in the big league.
         The materials and information you need to hack are all around you--only they
are seldom marked as such. Remember that a large proportion of what is passed off
as 'secret intelligence' is openly available, if only you know where to look and how to
appreciate what you find. At one time or another, hacking will test everything you
know about computers and communications. You will discover your abilities increase
in fits and starts, and you must be prepared for long periods when nothing new
appears to happen.
        Popular films and TV series have built up a mythology of what hackers can
do and with what degree of ease. My personal delight in such Dream Factory output
is in compiling a list of all the mistakes in each episode. Anyone who has ever tried
to move a graphics game from one micro to an almost-similar competitor will already
know that the chances of getting a home micro to display the North Atlantic Strategic
Situation as it would be viewed from the President's Command Post would be slim
even if appropriate telephone numbers and passwords were available. Less
immediately obvious is the fact that most home micros talk to the outside world
through limited but convenient asynchronous protocols, effectively denying direct
access to the mainframe products of the world's undisputed leading computer
manufacturer, which favors synchronous protocols. And home micro displays are
memory-mapped, not vector-traced... Nevertheless, it is astonishingly easy to get
remarkable results. And thanks to the protocol transformation facilities of PADs in
PSS networks (of which much more later), you can get into large IBM devices....
        The cheapest hacking kit I have ever used consisted of a ZX81, 16K RAM
pack, a clever firmware accessory and an acoustic coupler. Total cost, just over £100.
The ZX81's touch-membrane keyboard was one liability; another was the uncertainty
of the various connectors. Much of the cleverness of the firmware was devoted to
overcoming the native drawbacks of the ZX81's inner configuration--the fact that it
didn't readily send and receive characters in the industry-standard ASCII code, and
that the output port was designed more for instant access to the Z80's main logic
rather than to use industry-standard serial port protocols and to rectify the limited
screen display.
        Yet this kit was capable of adjusting to most bulletin boards; could get into
most dial-up 300/300 asynchronous ports, re-configuring for word-length and parity
if needed; could have accessed a PSS PAD and hence got into a huge range of
computers not normally available to micro-owners; and, with another modem, could
have got into view data services. You could print out pages on the ZX 'tin-foil'
printer. The disadvantages of this kit were all in convenience, not in facilities.
Chapter 3 describes the sort of kit most hackers use.
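        A brief note on the terminology in the paragraph above: "re-configuring for
word-length and parity" refers to ordinary asynchronous serial-port settings. The
Python fragment below is a minimal illustrative sketch of such a configuration using
the pyserial library; the device name, the 300 baud speed and the 7E1 settings are
assumptions chosen purely for illustration.

        import serial  # third-party library: pyserial

        # Illustrative sketch only: the device name and parameter choices are assumptions.
        link = serial.Serial(
            port="/dev/ttyUSB0",           # hypothetical serial device
            baudrate=300,                  # the 300/300 dial-up speed mentioned in the text
            bytesize=serial.SEVENBITS,     # "word-length": 7 data bits
            parity=serial.PARITY_EVEN,     # parity: EVEN, ODD or NONE
            stopbits=serial.STOPBITS_ONE,
            timeout=1.0,
        )
        link.write(b"HELLO\r")             # send a line to the remote system
        print(link.readline())             # read one line of the reply
        link.close()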
        It is even possible to hack with no equipment at all. All major banks now
have a network of 'hole in the wall' cash machines—ATMs or Automatic Telling
Machines, as they are officially known. Major building societies have their own
network. These machines have had faults in software design, and the hackers who
played around with them used no more equipment than their fingers and brains. More
about this later.
        Though I have no intention of writing at length about hacking etiquette, it is
worth one paragraph: lovers of fresh-air walks obey the Country Code; they close
gates behind them, and avoid damage to crops and livestock. Something very similar
ought to guide your rambles into other people's computers: don't manipulate files
unless you are sure a back-up exists; don't crash operating systems; don't lock
legitimate users out from access; watch who you give information to; if you really
discover something confidential, keep it to yourself. Hackers should not be interested
in fraud. Finally, just as any rambler who ventured past barbed wire and notices
warning about the Official Secrets Acts would deserve whatever happened thereafter,
there are a few hacking projects which should never be attempted.
         On the converse side, I and many hackers I know are convinced of one thing:
we receive more than a little help from the system managers of the computers we
attack. In the case of computers owned by universities and polys, there is little doubt
that a number of them are viewed like academic libraries--strictly speaking they are
for the student population, but if an outsider seriously thirsty for knowledge shows
up, they aren't turned away. As for other computers, a number of us are almost sure
we have been used as a cheap means to test a system's defenses...someone releases a
phone number and low-level password to hackers (there are plenty of ways) and
watches what happens over the next few weeks while the computer files themselves
are empty of sensitive data. Then, when the results have been noted, the phone
numbers and passwords are changed, the security improved etc etc....much easier on
dp budgets than employing programmers at £150/man/day or more. Certainly the
Pentagon has been known to form 'Tiger Units' of US Army computer specialists to
pin-point weaknesses in systems security.
        Two spectacular hacks of recent years have captured the public imagination:
the first, the Great Prince Philip Prestel Hack, is described in detail in chapter 8,
which deals with view data. The second was spectacular because it was carried out on
live national television. It occurred on October 2nd 1983 during a follow-up to the
BBC's successful Computer Literacy series. It's worth reporting here, because it
neatly illustrates the essence of hacking as a sport...skill with systems, careful
research, maximum impact with minimum real harm, and humour.
        The TV presenter, John Coll, was trying to show off the Telecom Gold
electronic mail service. Coll had hitherto never liked long passwords and, in the
context of the tight timing and pressures of live TV, a two letter password seemed a
good idea at the time. On Telecom Gold, it is only the password that is truly
confidential; system and account numbers, as well as phone numbers to log on to the
system, are easily obtainable. The BBC's account number, extensively publicized,
was OWL001, the owl being the 'logo' for the TV series as well as the BBC
computer.
        The hacker, who appeared on a subsequent programme as a 'former hacker' and
who talked about his activities in general, but did not openly acknowledge his
responsibility for the BBC act, managed to seize control of Coll's mailbox and
superimpose a message of his own:
        Computer Security Error. Illegal access. I hope your television
PROGRAMME runs as smoothly as my PROGRAM worked out your passwords!
        Nothing is secure! /41/

      2.2.5 Text “Cloning”

        Clone, an organism, or group of organisms, derived from another organism by
an asexual (nonsexual) reproductive process. The word clone has been applied to
cells as well as to organisms, so a group of cells stemming from a single cell is also
called a clone. Usually the members of a clone are identical in their inherited
characteristics—that is, in their genes—except for any differences caused by
mutation. Identical twins, for example, who originate from the division of a single
fertilized egg, are members of a clone; whereas nonidentical twins, derived from two
separate fertilized eggs, are not clones. Besides the organisms known as prokaryotes
(the bacteria and cyanobacteria), a number of other simple organisms, such as most
protozoans, many other algae, and some yeasts, also reproduce primarily by cloning,
as do certain higher organisms like the dandelion or aspen tree.
         Through recent advances in genetic engineering, scientists can isolate an
individual gene (or group of genes) from one organism and grow it in another
organism belonging to a different species.
          The species chosen as a recipient is usually one that can reproduce asexually,
such as a bacterium or yeast. Thus it is able to produce a clone of organisms, or cells,
that all contain the same foreign gene or genes. Because bacteria, yeasts, and other
cultured cells multiply rapidly, these methods make possible the production of many
copies of a particular gene. The copies can then be isolated and used for study (for
example, to investigate the chemical nature and structure of the gene) or for medical
and commercial purposes (for example, to make large quantities of a useful gene
product such as insulin, interferon, and growth hormone). This technique is called
cloning because it uses clones of organisms or cells. It has great economic and
medical potential and is the subject of active research.
         Identical-twin animals may be produced by cloning as well. An embryo in the
early stage of development is removed from the uterus and split, and then each
separate part is placed in a surrogate uterus. Mammals such as mice and sheep have
been produced by this method, which is generally called embryo splitting.
         Another development has been the discovery that a whole nucleus, containing
an entire set of chromosomes, can be taken from a cell and injected into a fertilized
egg whose own nucleus has been removed. The division of the egg brings about the
division of the nucleus, and the descendant nuclei can, in turn, be injected into eggs.
After several such transfers, the nuclei may be capable of directing the development
of the eggs into complete new organisms genetically identical to the organism from
which the original nucleus was taken. This cloning technique is in theory capable of
producing large numbers of genetically identical individuals. Experiments using this
technique have been successfully carried out with frogs and mice.
         Progress in cloning higher mammals beyond an early embryonic stage
presents a much more formidable challenge. Genes in cells at the earliest stages of
embryonic life carry the encoded knowledge that enables cells to develop into any
part of the body. But skeptics theorized that once cells form into specific body
components, they thereafter lose the capability to reconstruct the entire organism
from the genetic contents of the nucleus.
         However, in July 1996, a team of Scottish scientists produced the first live
birth of a healthy sheep cloned from an adult mammal. The team scraped skin cells
from the udder of a donor sheep (sheep A) and these cells were temporarily starved of
nutrients to halt cell development. An unfertilized egg was removed from a second
sheep (sheep B) and its nuclear material was removed to eliminate genetic
characteristics of the donor egg. A skin cell from sheep A (containing a nucleus with
genetic material) was fused with the unfertilized egg from sheep B. The egg, now
with a full complement of genes, began dividing and was placed into the uterus of a
surrogate mother (sheep C). The embryo developed normally and was delivered
safely. Named Dolly, this healthy sheep was introduced to the world with much
fanfare in February 1997.
         While Dolly has most of the genetic characteristics of sheep A, she is not a
true clone. Not all of an animal's genes are found in the cell's nucleus. There are a
few dozen genes that reside in the mitochondria outside the nucleus in the cell's
cytoplasm. In Dolly's case, some of these genes were supplied by the donor egg of
sheep B.
         The creation of Dolly represents a unique advance for cloning technology, but
it inevitably intensified the debate about subjecting humans to cloning. Rather than a
prelude to human cloning, however, many scientists herald the achievement as the
forerunner of a revolution in animal breeding that will allow the highest quality farm
animals to be produced and will provide a cost-effective method of producing
medicines for human use. Cloning may also be used to create genetically altered
animals capable of providing major organs for surgical transplantation into human
beings.

      2.2.6 Text “The Future of Global Communications: We have seen the
Future and It is Wireless”

         It’s another workday and you’re on the 7:05 train whisking you at 190 miles an
hour into the big city. Your laptop displays the morning news, which is being beamed
directly from the wire services.
         Suddenly, you hear a beep coming from your wrist pager. The verbal mode
kicks in and you hear an electronically synthesized voice telling you to send the facts
concerning this morning’s new business proposal. From your pocket you pull out
your personal cellular telephone and say, “Call my boss.” Automatically, it dials his
personal communicator. You tell him that the requested data will be immediately
faxed. Then you plug your cellular phone into your laptop computer, and your boss is
reading the facts.
         A few minutes later, another message from your boss beeps in, thanking you
for the information and asking you to meet him downtown at the Express port.
         This was published more than ten years ago. Sound like a page out of the
future? Maybe so, but what may sound like tomorrow’s technology is here today.
Right now, we’re in the midst of a communication revolution. And the
revolution is wireless.
         The freedom that a wireless system of communication affords will have a
limitless effect on every aspect of one’s life. The wires that tied people to one
location ever since Alexander Graham Bell invented the telephone have been cut by
advanced technology. They are being replaced by high frequency radio technology
and ultra-sophisticated phone switching devices. Combine that with custom designed
integrated circuits and you have marvels such as voice-activated calling and voice-
synthesized message capabilities.
        In the not-too-distant future, the phones in your office and home may be wire-
free. Indeed, some of them already are, with sound quality that rivals wired
quality. However, wireless voice transmission is just the beginning. Technological
advances are making it possible to transmit data as well.
        In fact, it will soon be as common to connect computers by ultra-high
frequency, distortion-free radio transmitters as it is with wires that run through walls.
Even portable computers, like the kind you take on trains, are now in constant contact
with their databases. When someone needs to access the mainframe, they simply plug
their computer into their cellular phone. What’s more, the advent of digital
technology will ensure error-free data transmission.
        Even more astounding, the effects of the wireless revolution will soon be
global. Companies like Motorola had on the drawing board plans to launch 77 low
Earth orbit satellites that essentially would allow anyone with a cellular phone to
communicate with anyone else on Earth simply by dialing their personal telephone
number. And they did. One person, one number. A staggering achievement.
        Overall, it’s obvious that the future of personal communication has no wires
attached. The freedom it has brought should allow for unheard-of opportunities for
increased productivity and personal enrichment.
        And for those who feel that being in constant contact with the world around
you is a little too much like 2001, remember this. You can always turn it off.

      2.2.7 Text “Careers”

        Twenty-five years ago, armed with a degree in accounting, I joined my
current employer in an entry-level position. These past 25 years have been good to
me. I've steadily risen in responsibility and title and currently manage a department of
45 people. But I'm thinking of leaving. After all these years with a large corporation,
I'm wondering whether working for a smaller company might not provide greater
rewards, both psychologically and financially. I have a few friends who left jobs with
big companies to join smaller firms, in one case going from a company generating
billions of dollars a year to a six-person startup company. He seems happy enough,
but his only complaint is that he lacks the staff and resources he once enjoyed at his
previous employer. Any thoughts on the rewards versus the risks of going from big to
small?
        Find your niche
        You pose two different questions. The decision whether to stay where you are
or to seek another job has more to do with your personal situation than deciding
whether you'd be happier with a smaller company. I'll focus on the big-vs.-small
question because if you do decide to leave your present situation, chances are you'll
be seeking employment with a smaller firm.
         Here's why.
         A recent report published by Dun & Bradstreet said that companies with
fewer than 20 employees are expected to have created more than half of all new jobs
last year. And companies with between 20 and 499 people will have spawned another
third of new employment. Smaller companies will have generated approximately 2.5
million new jobs in 1995. At the same time, large corporations continue to downsize.
Dun & Bradstreet estimates that big companies (with more than 500 employees) will
create only slightly more than 1% of new jobs.
         What that means to you is that if you do leave your current position, the odds
are very good you'll be talking to smaller companies.
         Your friend's complaint about lacking staff and resources is commonly heard
from executives who've left a large corporation to join a smaller firm. Still, many
people who've made that switch find themselves enjoying a renewed sense of hands-
on involvement. They quickly learn to appreciate the lack of bureaucracy common in
big companies. Because smaller companies mean smaller staffs, each employee is
expected to contribute more. As a result, hours can be longer and demands greater.
You've had 45 people pulling together to accomplish your department's goals. With a
small company, you may find yourself doing the same work, but by yourself. And
while adjusting to that solo responsibility, you might also find yourself being asked to
lend a hand in the marketing of your smaller employer's products or services. Many
men and women leaving big business to work for smaller companies report a feeling
of satisfaction because of their direct involvement in the smaller company's future.
Rather than having to go through many layers of management to reach the
ultimate decision maker, they find themselves in close proximity to the smaller firm's
president, needing only to pop in when they need an immediate decision.
        There is the parallel satisfaction of feeling like an entrepreneur without having
to take the ultimate risk of going into one's own business. The smaller company's
success will rise and fall with the collective efforts of just a few people, including
you.
        Chances are you'll be paid less by a smaller company. But while your base
pay might not match what you enjoyed at the big corporation, small firms offer
bonuses and stock options on performance. In many cases, a successful company will
end up paying seasoned executives more in the long run than previous large
employers have paid. But, of course, if the smaller company doesn't prosper, neither
will you.
        Smaller companies need experienced executives like you to keep up with the
demands of their growth. Growth can be chaotic and rapid, creating the need to fill
positions quickly just to keep pace. This means not advertising as often, instead
filling positions through recommendations from others. Nowhere is the use of an
effective professional network as important as when you seek a job with a small firm.
        When a small company lands a new contract, it's often reported in the
newspapers or in a trade publication. This notice provides an opportunity for you to
let the management of that company know that you're available.
        A new contract often means a need to expand the staff. My advice is to be open
to every opportunity out there, whether it's a huge, multinational corporation with
billions in sales or six people who've found a niche and are committed to filling it.
/39/

      2.2.8 Text “Programming by Example” (by Henry Lieberman)

        Henry Lieberman is a research scientist in the Media Laboratory at the
Massachusetts Institute of Technology in Cambridge, Mass.
        Avoiding the voodoo of conventional programming, users get personalized
solutions to one-of-a-kind application problems that can be used over and over again.
       When I first started to learn about programming, many more years ago than I
care to think about, my idea of how it should work was that it should be like teaching
someone how to perform a task. After all, isn’t the goal of programming to get the
computer to learn and then actually perform some new behavior? And what better
way to teach than by example?
        So I imagined what you would do would be to show the computer an example
of what you wanted it to do, go through it step by step, and then have it try to apply
what you had shown it in some new example. I guessed that you’d have to learn
some special instructions that would tell it what would change from example to
example and what would stay the same. But basically, I imagined it would work by
remembering examples you showed it and replaying the remembered procedures.
        Imagine my shock when I found out how most computer programmers
worked. There were these things called “programming languages” that didn’t have
much to do with what you were actually working on. You had to write out all the
instructions for the program in advance, without being able to see what any of them
did. How could you know whether they did what you wanted? If you didn’t get the
syntax exactly right (and who could?) nothing would work. Even after you had the
program, tried it out, and something went wrong, you couldn’t see what was going on
in the program. How could you tell which part was wrong? Wait a second, I thought:
this approach to programming couldn’t possibly work.
        I’m still trying to fix it.
        Over the years, a small but dedicated group of researchers came to feel the
same way I did, ultimately developing a radically different approach to programming,
called “programming by example” (PBE). It is sometimes also called “programming
by demonstration”, because the user demonstrates examples of the desired behavior
to the computer. A software agent records the interactions between the user and a
conventional “direct manipulation” interface and writes a program corresponding to
the users’ actions. The agent can then generalize the program so it works in other
situations similar to, but not necessarily exactly the same as, the examples on which it
was taught.
        This ability makes PBE like macros on steroids. Conventional macros are
limited to playing back exactly the steps recorded, making them brittle, because if the
slightest detail of the context changes, the macro ceases to work. Generalization is
also PBE's central problem, the solution of which should enable PBE to replace
practically all conventional programming.
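        The recording-and-generalizing mechanism described in the two paragraphs
above can be illustrated with a minimal sketch. The Python fragment below uses a
made-up renaming task and hypothetical names, chosen only for illustration: a literal
macro replays exactly the result that was recorded, while a tiny PBE-style generalizer
infers a reusable rule from two demonstrations and applies it to a new case.

        # Two recorded demonstrations: the user renamed a file by adding a prefix.
        examples = [
            {"input": "report.txt", "output": "old_report.txt"},
            {"input": "notes.txt", "output": "old_notes.txt"},
        ]

        def literal_macro(_new_input):
            """Brittle playback: always repeats the first recorded result."""
            return examples[0]["output"]

        def generalize(demos):
            """Infer what stayed the same (the prefix) and what varied (the file name)."""
            prefix = demos[0]["output"][: -len(demos[0]["input"])]
            # Accept the rule only if it explains every demonstration.
            assert all(d["output"] == prefix + d["input"] for d in demos)
            return lambda name: prefix + name

        rename = generalize(examples)
        print(literal_macro("budget.txt"))   # old_report.txt -- brittle, wrong
        print(rename("budget.txt"))          # old_budget.txt -- generalized

        Real PBE systems face far harder generalization problems than inferring a fixed
prefix, which is exactly the central problem referred to above.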
        Children might represent the first real commercial market for PBE systems.
They are not spoiled by conventional ideas of programming; for them, usability and
immediacy are paramount. That’s why it’s with children in mind that this special
section explores two notable PBE systems recently brought to market to enthusiastic
receptions from their initial users, many of whom are children. David Canfield Smith
and Allen Cypher's Stagecast Creator, which evolved from Apple Computer's
Cocoa and KidSim, brings rule-based PBE to a graphical grid world. And Ken
Kahn’s Toon Talk, a programming system that is simultaneously a video game, uses
a radically different programming model, as well as a radically different user interface.
Toon Talk solves the problem of generalizing examples in a simple, almost obvious
way: by removing detail. The program is less specialized and therefore more
applicable in a wider range of situations.
        We also analyze PBE's user requirements, examples of functioning PBE
systems, and directions for the future of PBE that hopefully all demonstrate the
power and potential of this innovative technology.
        One way PBE departs from conventional software is how it applies new
techniques from AI and machine learning. Incorporating these techniques represents a
tremendous opportunity for PBE but incurs the risk that the system will make
unwanted generalizations.
        We can’t convince people about PBE's innate value unless we offer at least
some good examples of how PBE is being used in specific application areas. For
example, some researchers unite PBE and the Web – everybody’s favorite application
area today. The Web is a great focus for PBE because of its accessibility to a wealth
of knowledge, along with the pressing need for helping users organize, retrieve, and
browse it all. Recent developments in intelligent agents can help, but only if users are
able to communicate their requirements to and control the behavior of their agents.
PBE is ideal. PBE can also be used to automate many other common but mundane
tasks that under conventional circumstances consume a frustratingly large fraction of
programmers' and users' time.
        So, you may ask, if PBE is so great, how come everybody isn’t using it? PBE
represents a radical departure from what we now know as programming; it can’t help
but take a while before it becomes widespread, despite the existence of many systems
demonstrating its feasibility and value in improving applications in a variety of
domains. The conservatism of the programming community is the biggest obstacle to
widespread PBE use.
        Repenning and Perrone show how to make PBE more like human learning by
using analogy, an important intuitive cognitive mechanism. We often explain new
examples by way of analogy with things we already know, allowing us to transfer and
reuse old knowledge. They show how we can use analogy mechanisms to edit PBE
programs, as well as to create such programs from scratch.
        Finally, the researchers explore what at first might seem a crazy approach. We
have the computer simulate the users’ visual system in interpreting images on the
screen, rather than accessing the underlying data. Though it may seem inefficient, this
approach neatly sidesteps one of PBE's thorniest problems: coexistence with
conventional applications. It enables what we call “visual generalization”, or
generalizing applications on how things appear to users on the screen, as well as on
the properties of the data.
        PBE is one of the few technologies with the potential for breaking down the
Berlin Wall that has always separated programmers from users. It allows users to
exploit the procedural generality of programming while remaining in the familiar user
interface. Users today are generally at the mercy of software providers delivering
shrink-wrapped, one-size-fits-all, unmodifiable applications. With PBE, they could
create personalized solutions to one-of-a-kind problems, modifying existing programs
and creating new ones, without going through the arcane voodoo characterizing
conventional programming. /36/

      2.2.9 Text “Teachers and Technology: Easing the Way” (by Henry J.
Becker)

         As technology professionals, parents, and community members, how can we
help grade school teachers integrate technology into the classroom?
         Asking K-12 teachers to integrate networked computers into the classroom is
the biggest challenge we have given them in the last 200 years. Stridently
admonishing them to change in the media isn’t the way to help them make the
transition. It is our responsibility to create the workplace conditions that enable,
complement, and support teachers.
         Technology’s disruptiveness is not unique to education; it has caused all
manner of stress in professionals from accountants to zoologists. But non-teaching
professions have generally been interacting with technology for upwards of 20 years,
first automating, and now informating (the term represents uses of technology that go
beyond the automation of paper-and-pencil practices and truly leverage
computational capabilities) their activities. They have had time to amortize the pain
of adjusting their work practices to take advantage of technological advances.
         It is only now that teachers are hitting the technology wall, which was
avoidable in the 1980s and 1990s. In the 1980s, technology was segregated from the
curriculum, and computer literacy courses were taught by “computer teachers”. In the
1990s, technology became supplemental to the curriculum. Textbook lesson plans
had annotations at the bottom of the page instructing teachers to have children play,
say, the simulation program called “Oregon Trail” if time permitted. Well, there is
never time in the school day for extra things! Thus, teachers avoided dealing with
technology for another decade.
         But today we are asking teachers to integrate technology into the classroom.
Schools are creating technology skills requirements for students, and standards bodies
such as the National Council for the Teaching of Mathematics and the American
Association for the Advancement of Science are identifying technologies that need to
be incorporated into subject areas and activities (such as the use of computer-based
probes to measure the quality of water in a local stream or lake).
         We can’t place the burden of change solely on the backs of teachers. We
must try to identify and understand the conditions that enhance the use of computers
in the classroom, and develop strategies to create those conditions in our schools.
         Towards that end, this column covers a broad range of topics, from examining
technology teaching practices to describing school district policies that lead to
effective use of technology, from analyzing teacher technology preparation programs
to business strategies for delivering technology-based products to the classroom. Our