Tasktrackers regularly heartbeat small bits of information to the jobtracker to indicate they're alive. Again, this isn't a source of pain for most administrators, save for the extreme scale cases. Client applications also do not communicate directly with tasktrackers, instead performing most operations against the jobtracker and HDFS. During job submission, the jobtracker communicates with the namenode, but also in the form of small RPC requests. The true bear of MapReduce is the tasktracker traffic during the shuffle phase of a MapReduce job.

As map tasks begin to complete and reducers are started, each reducer must fetch the map output data for its partition from each tasktracker. Performed over HTTP, this results in a full mesh of communication; each reducer (usually) must copy some amount of data from every other tasktracker in the cluster. Additionally, each reducer is permitted a certain number of concurrent fetches. This shuffle phase accounts for a rather significant amount of East/West traffic within the cluster, although it varies in size from job to job. A data processing job, for example, that transforms every record in a dataset will typically transform records in map tasks in parallel. The result tends to be a different record of roughly equal size that must be shuffled, passed through the reduce phase, and written back out to HDFS in its new form. A job that transforms an input dataset of one million 100 KB records (roughly 95 GB) to a dataset of one million 82 KB records (around 78 GB) will shuffle at least 78 GB over the network for that job alone, not to mention the output from the reduce phase that will be replicated when written to HDFS.

68 | Chapter 4: Planning a Hadoop Cluster

Remember that active clusters run many jobs at once and typically must continue to take in new data being written to HDFS by ingestion infrastructure. In case it's not clear, that's a lot of data.

1 Gb versus 10 Gb Networks

Frequently, when discussing Hadoop networking, users will ask if they should deploy 1 Gb or 10 Gb network infrastructure. Hadoop does not require one or the other; however, it can benefit from the additional bandwidth and lower latency of 10 Gb connectivity. So the question really becomes one of whether the benefits outweigh the cost. It's hard to truly evaluate cost without additional context. Vendor selection, network size, media, and phase of the moon all seem to be part of the pricing equation. You have to consider the cost differential of the switches, the host adapters (as 10 GbE LAN on motherboard is still not yet pervasive), optics, and even cabling to decide if 10 Gb networking is feasible. On the other hand, plenty of organizations have simply made the jump and declared that all new infrastructure must be 10 Gb, which is also fine. Estimates, at the time of publication, are that a typical 10 Gb top of rack switch is roughly three times more expensive than its 1 Gb counterpart, port for port.

Those that primarily run ETL-style or other high input-to-output data ratio MapReduce jobs may prefer the additional bandwidth of a 10 Gb network. Analytic MapReduce jobs (those that primarily count or aggregate numbers) perform far less network data transfer during the shuffle phase, and may not benefit at all from such an investment. For space- or power-constrained environments, some choose to purchase slightly beefier hosts with more storage that, in turn, require greater network bandwidth in order to take full advantage of the hardware. The latency advantages of 10 Gb may also benefit those that wish to run HBase to serve low-latency, interactive applications. Finally, if you find yourself considering bonding more than two 1 Gb interfaces, you should almost certainly look to 10 Gb as, at that point, the port-for-port cost starts to become equivalent.

Typical Network Topologies

It's impossible to fully describe all possible network topologies here. Instead, we focus on two: a common tree, and a spine/leaf fabric that is gaining popularity for applications with strong East/West traffic patterns.

Traditional tree

By far, the N-tiered tree network (see Figure 4-3) is the predominant architecture deployed in data centers today. A tree may have multiple tiers, each of which brings together (or aggregates) the branches of another tier. Hosts are connected to leaf or access switches in a tree, which are then connected via one or more uplinks to the next tier. The number of tiers required to build a network depends on the total number of hosts that need to be supported. Using a switch with 48 1 GbE and four 10 GbE port

Figure 4-3
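A standard way to reason about an access switch like the 48-port 1 GbE model mentioned above is its oversubscription ratio: host-facing bandwidth divided by uplink bandwidth. The sketch below assumes the four 10 GbE ports are used as uplinks to the next tier, which the text implies but does not state outright:

```python
# Oversubscription of a hypothetical top-of-rack switch:
# host-facing bandwidth versus uplink bandwidth to the next tier.
host_ports, host_speed_gb = 48, 1    # 48 x 1 GbE down to servers
uplinks, uplink_speed_gb = 4, 10     # assumed: 4 x 10 GbE up to aggregation

downstream = host_ports * host_speed_gb   # 48 Gb/s toward hosts
upstream = uplinks * uplink_speed_gb      # 40 Gb/s toward the core

ratio = downstream / upstream
print(f"oversubscription: {ratio:.1f}:1")  # 1.2:1
```

A 1.2:1 ratio is mild; shuffle-heavy MapReduce workloads suffer as this ratio grows, which is one reason East/West-oriented fabrics are attractive for Hadoop.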
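The shuffle-volume estimate from the example job earlier in this section (one million 100 KB input records transformed into one million 82 KB records) is easy to check with back-of-the-envelope arithmetic. This sketch only reproduces the chapter's own figures:

```python
# Back-of-the-envelope check of the shuffle example:
# one million 100 KB records in, one million 82 KB records out.
RECORDS = 1_000_000
KB, GB = 1024, 1024 ** 3

input_bytes = RECORDS * 100 * KB    # dataset read by the map tasks
shuffle_bytes = RECORDS * 82 * KB   # map output that must cross the network

print(f"input:   {input_bytes / GB:.1f} GB")    # 95.4 GB
print(f"shuffle: {shuffle_bytes / GB:.1f} GB")  # 78.2 GB
```

And this 78 GB is only the shuffle; with HDFS replication, the reduce output crosses the network again when it is written back, multiplying the total traffic for the job.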
