tag:blogger.com,1999:blog-314608522024-03-08T02:59:32.933-08:00Computing IntensiveDefying fanboism, revealing the truth.pointerhttp://www.blogger.com/profile/17388854963223201475noreply@blogger.comBlogger13125tag:blogger.com,1999:blog-31460852.post-62968432521274624202009-09-09T08:25:00.000-07:002009-10-28T06:35:12.198-07:00How to make Turbo Boost work under LinuxJust saw some funny accusations that Intel's Turbo Boost is broken and not working in Linux ... hence one needs to turn off Turbo Boost when running benchmarks ... and some wishes that AMD's implementation would 'fix' that bug ... and of course, this is from the AM<em><span style="color:#3366ff;">FU</span></em>DZone :)<br /><br />Well, I'll just list some simple rules for any OS to run Turbo Boost:<br /><ol><li>The OS must support ACPI</li><li>The BIOS in use must support ACPI</li><li>The EIST and Turbo Boost BIOS options must be turned on</li><li>C-states should be enabled</li><li>The OS must turn on its power management features, for both P-states and C-states</li><li>Among the P-state entries, P1 should correspond to the chip's default frequency</li></ol><p>Here are the brief explanations. </p><p>Turbo Boost is entered through P-state 0. 
Thus the system (BIOS and OS) must support ACPI and turn those options on. Turbo Boost is guarded by thermal and power headroom; enabling (deeper) C-states helps the CPU run at a higher frequency because headroom is more likely to be available.</p><p>The sixth requirement is not quite obvious. I have personally seen a Linux kernel variant's debug check trip when it is not fulfilled.</p><p>A side note: there are also people claiming that the CPU runs at a higher frequency unnecessarily. Actually this is untrue. It again depends on the user's choice of power policy. Take Windows XP for example: if a user chooses the power scheme "Home/Office Desk", the CPU(s) would run at P-state 0 most of the time (except when entering an enhanced C-state, where it drops to a lower P-state before idling). The CPU would be under Turbo Boost most of the time. But this makes little difference: if the given CPU did not support Turbo Boost, it would be running at its default max frequency under this setting anyway. If one has a concern about this, one could just use the power scheme "Portable/Laptop", like I did, even on a desktop system. Then when you are doing light work, your system runs at a lower P-state, and enters Turbo under high load.</p><p></p><p>Then there is the usual accusation that this wastes power on a server that is idling most of the time ... wait, if one were to enable Turbo Boost on a server, and had a concern with power ... should one not turn on the power saving policy so that it enters a lower P-state at low usage??? :) Anyway, I am not a system admin and not sure about server power policy; feel free to correct me, either with your experience or with known data. 
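For concreteness, the six rules above can be collapsed into a simple checklist. This is an illustrative sketch only; the function and parameter names are made up and do not correspond to any real kernel or BIOS API.

```python
# Hypothetical checklist mirroring the six rules in this post; all names are
# made up for illustration and do not correspond to any real kernel/BIOS API.
def turbo_boost_ready(os_acpi, bios_acpi, eist_on, turbo_on,
                      cstates_on, os_pm_on, pstate_mhz, base_mhz):
    """Return True when every prerequisite for Turbo Boost is met.

    pstate_mhz maps P-state index -> requested frequency in MHz.
    Rule 6: the P1 entry must equal the chip's default (base) frequency;
    P0 is the turbo request above it.
    """
    basics = all([os_acpi, bios_acpi, eist_on, turbo_on, cstates_on, os_pm_on])
    # Rule 6: P1 corresponds to the chip's default frequency.
    return basics and pstate_mhz.get(1) == base_mhz
```

So a setup with C-states disabled in the BIOS, or a P-state table whose P1 entry does not match the default frequency, fails the checklist just as the rules above say it should.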
If you were to share your viewpoint/guess, please add words such as "think/guess", or put the statement in the form of a question like I did :)</p><p>Then there is the even funnier statement on why one would buy a CPU with Turbo Boost and then turn it off, because they want <span style="color:#3366ff;">Consistent Results</span> or claim that <span style="color:#3366ff;">With turbo mode, the additional clock rates is not guaranteed</span> ... and those are likely the same folks who would turn on their CnQ :)</p>pointerhttp://www.blogger.com/profile/17388854963223201475noreply@blogger.com9tag:blogger.com,1999:blog-31460852.post-91148864602748513702009-06-27T23:36:00.000-07:002009-06-28T08:02:24.530-07:00Debunking Turbo Boost FUDSome self-proclaimed technical professionals from AM(FU)Dzone are spreading FUD, again, on Intel Turbo Boost. Some raise intelligent doubts about it, while some raise pure FUD, especially the moderator I was having fun with in my last post :) So, what am I trying to do here? Try explaining what Turbo Boost really is while having some fun at their expense, again, of course! :)<br /><br />The Intel Turbo Boost technology white paper is available from <a href="http://download.intel.com/design/processor/applnots/320354.pdf">http://download.intel.com/design/processor/applnots/320354.pdf</a>, and all my explanations are based solely on that document. Why bother repeating what has been documented? Because some questions about Turbo mode need points from various places to explain, and might need some minimal understanding of the platform architecture and terms.<br /><br />Intel® Turbo Boost technology automatically allows processor cores to run faster than the base operating frequency if the processor is operating below rated power, temperature, and current specification limits. 
Put another way, it uses available headroom to operate at a higher frequency. Where does this headroom (not talking about possible down-binning to fulfill demand, nor manufacturing guardband) come from? Software exercises the CPU differently, and mostly will exercise the CPU at the base operating frequency with some headroom left in the rated power/temperature/current. On top of that, if the software does not take up all cores, some cores will go into an idle/inactive state (C3 and below for Nehalem), which creates further headroom available for the boost. Let's look at this FUD from the FUDZone:<br /><br /><a href="http://www.amdzone.com/phpbb3/viewtopic.php?f=52&p=161060&sid=9b73221179a3b87c1ae0acf4175e34e4#p161060"><em>AMDZone.com • View topic - AMD's Magny Cours Architecture revealed</em></a><em>: <span style="color:#ff0000;">"So in order to accelerate single-threaded performance, ALL CORES are dissipating more power"</span></em> Clearly this guy does not know what he is talking about, or is spreading FUD on purpose. 
Per the whitepaper description, for the software in question to trigger the single-threaded boost, the other cores must be in an inactive state in the first place; how, then, are they going to dissipate more power? :) When a core is in C3, it consumes very little power, and even less in C6.<br /><br /><a href="http://www.amdzone.com/phpbb3/viewtopic.php?f=52&t=136451&st=0&sk=t&sd=a&start=150">AMDZone.com • View topic - AMD's Magny Cours Architecture revealed</a>: <span style="color:#ff0000;">"So although CPU may have thermal detector to throttle its own clock rate, but the attempts to exceed TDP will affect the whole system as the extra heat must be dissipated to the environment. I believe this is why Turbo mode is turned off in many/most IT and datacenter environments." <span style="color:#000000;">More FUD here :) How is Turbo Boost exceeding the TDP here? It is by definition designed to obey the TDP (utilizing the power/temperature/current headroom). And wow, this guy 'knows' for a fact that Turbo mode is turned off in many/most IT and datacenter environments!! 
</span></span><br /><span style="color:#ff0000;"></span><br /><a href="http://www.amdzone.com/phpbb3/viewtopic.php?f=52&t=136451&st=0&sk=t&sd=a&start=150">AMDZone.com • View topic - AMD's Magny Cours Architecture revealed</a>: <span style="color:#ff0000;">"A clean implementation of such idea should take TDP restriction and software controllability into account. The latter is favorable to server/worktation environment so a system reboot is not needed to enable/disable the feature. In addition, it might also be preferable to implement better thread/process affinity to cores, so the running threads/processes won't jump across different cores excessively."</span> The first part is especially funny when compared to the second quote later. As for software control to enable/disable the feature without a reboot, without touching any internal information, this could already be implemented with ACPI (hint: read the spec on the P-state related methods), but I do not think any OEM does that; why allow it at runtime anyway? 
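As an aside, the P-state mechanics behind all this can be sketched with a toy table. This is illustrative only: the frequencies are made-up numbers loosely patterned on a Nehalem-era part with a 133 MHz base clock, not real ACPI firmware data.

```python
# Toy ACPI-style performance-state table. Index 0 (P0) is the turbo request;
# index 1 (P1) is the base/marked frequency. Values in MHz, made up for
# illustration; one turbo "bin" here is one 133 MHz multiplier step.
BCLK_MHZ = 133
PSTATES_MHZ = [2933 + 2 * BCLK_MHZ, 2933, 2400, 1600]  # P0, P1, P2, P3

def os_request(pstate_index):
    """Frequency the OS asks for when it selects a P-state."""
    return PSTATES_MHZ[pstate_index]

def enters_turbo(pstate_index):
    # Requesting P0 is how the OS opts into turbo; whether the core actually
    # reaches the P0 frequency depends on power/thermal/current headroom.
    return pstate_index == 0
```

An OS power policy that parks at P0 under load (like the "Home/Office Desk" scheme described earlier) is constantly opting into turbo, while a conservative policy sits at P2/P3 until load rises.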
Now let's have fun on the first part:<br /><br /><a href="http://www.amdzone.com/phpbb3/viewtopic.php?p=160965#p160965">http://www.amdzone.com/phpbb3/viewtopic.php?p=160965#p160965</a>: <span style="color:#ff0000;">"Lets take a slightly different point of view and ask what would it take for AMD to implement "Turbo Mode"? What mechanisms are missing?<br />...<br />For AMD to implement something like the Turbo mode, two extra things are needed. (1) An on-die thermal sensor that accurately detects core temperature. (2) The "reverse" CnQ driver that set target P-state to ones that are above the default max. According to AMD's latest revision guide, the thermal sensor problem in Phenom is already fixed in the Phenom II revision, so requirement 1 is met. What is needed is then BIOS/driver support for one or two extra P-states above the current default maximum.<br /><br />Implemented correctly, it should be as reliable as the CnQ (probably more reliable than Intel's Turbo Mode). I suppose AOD/3 is already doing something similar. We know how good AMD's CnQ is, and how easy it is to active/support it. 
I really don't see such "self-overclocking" is much a big deal."</span> Per what he described, AMD just needs to measure the temperature (while Intel's measuring of power/temperature/current is somehow not enough :), even though those are what actually correlate to the TDP, btw).<br /><br />Enough about the FUD; let's look at the other, intelligent/unintelligent doubts there (at least not totally ill-intentioned).<br /><br /><a href="http://www.amdzone.com/phpbb3/viewtopic.php?f=52&t=136452&p=161049#p161049">AMDZone.com • View topic - Valencia and 16-core Interlagos are based on Bulldozer!</a>: <span style="color:#ff0000;">"This feature is really ONLY about the benchmarks. Having a chip clocked at it's highest and then underclocking is a much better design."</span><br /><br /><a href="http://www.amdzone.com/phpbb3/viewtopic.php?p=160964#p160964">http://www.amdzone.com/phpbb3/viewtopic.php?p=160964#p160964</a> : <span style="color:#ff0000;">"WHY would they bother?<br /><br />FIRST:<br /><br />Actually I see TurboBoost as being "kind of" dishonest. If the chip can run faster and has passed testing and validation at the higher speed, then why isn't it just clocked higher? It is easy to see that a much more eloquent and cleaner design would be to go ahead clock it at the fastest it can be tested and validated at and then put in a mechanism to downclock cores independently if they are not needed. 
OH WAIT WE ALREADY HAVE THAT.<br /><br />Thus: With the availability of dynamic underclocking the only reason for TurboBoost to exist is as a marketing gimmick. It might fool some people, and cause others to ignore the truth, but in reality it isn't that useful.<br /><br /><span style="color:#000000;">This is all about binning. For most of the shipped products, I do not think the said CPU could run at the higher frequency while maintaining its TDP category and reliability, period. Besides, right now in the market it is fused as a 2/1/1/1 frequency bin boost; in the future there could be W/X/Y/Z, with W much bigger than Z. The said chip definitely cannot be marked with frequency W as the base operating frequency. It cannot be marked with frequency Z either, if you understand that some software exercises the CPU more than other software. Turbo is useful under that 'other software' environment without breaking the TDP category.</span></span><br /><span style="color:#ff0000;">SECOND:<br /><br />Something even better than TurboBoost would be to allow the system to overclock differently for various specific applications. The user should be able to specify how much to overclock and set other parameters such as voltage. OH WAIT WE ALREADY HAVE THAT. But of course since it's not in microcode or in the bios then many people don't consider that "allowed" for benchmarking. 
And these same people will adamantly insist that TB be allowed for benchmarking because "real system performance" is more important than clock per clock comparisons. But that same argument can used to defend TB also applies to using tools such as AOD. (Personally I think BOTH should not be used for comparative benchmarking. Comparative means we want to KNOW how they compare. Dynamic features only confuse the issue.)<br /></span><br /><span style="color:#ff0000;"><span style="color:#000000;">I am not too sure how one could claim this, as the hardware-assisted TB can 'OC' safely by measuring the CPU headroom, while his recommended approach is actually a per-application user setting, which might or might not work properly. I do like the AOD profile ability for certain usage, which I won't describe here :)</span><br />THIRD:<br /><br />And one of the most relevant points for SERVERS: Most experienced administrators do NOT want added complexity in their systems. Period. Especially when it only adds another point of failure. And as others have mentioned: The small amount of "the best performance at all costs" people don't make up a large enough demographic to spend time and money on." </span><br /><br />I have no experience in server administration and thus won't comment much on this, but I believe this is a lame excuse: there are many other features that add more complexity to the system than Turbo mode, which merely changes frequency dynamically, something long implemented in the reverse direction (SpeedStep).<br /><br />Ok, enough of the Q&A and fun :).pointerhttp://www.blogger.com/profile/17388854963223201475noreply@blogger.com0tag:blogger.com,1999:blog-31460852.post-18104737876750327522008-07-03T22:52:00.000-07:002008-07-08T10:25:50.973-07:00FanboismIt has been quite a while since my last post. It is actually more than a year! 
I have been commenting mostly in Roborat64's blog instead of working on any posting of my own. I also tried posting in AMDzone using the name p4nee (some special meaning in Chinese :)), teasing them about their <span style="font-weight: bold;">double standards</span>, but got banned at the 7th post, because I returned the very same word to someone who had directed it at me. I have stopped posting there since; why should I let those fanbois control my ability to post?<br /><br />Nevertheless, I still visit that site for some jokes (believe me, they are! :)), and once in a while Scientia's blog too, which is getting fewer commenters now. Some of the jokes there are just too outstanding, and thus I have decided to keep them here. Below is the first one, and there will be more to come when I have time to dig out some of their older threads or find some new ones. Enjoy! :)<br /><br />From <a href="http://www.amdzone.com/phpbb3/viewtopic.php?f=52&t=135267&st=0&sk=t&sd=a&sid=0fdd4c0925ef854c681a1378405fb5fa&start=25">AMDZone</a><br /><br />by <strong><a href="http://www.amdzone.com/phpbb3/memberlist.php?mode=viewprofile&u=20725&sid=9ee814f199bba4b5cb570d57059cc215">abinstein</a></strong> on Thu Jul 03, 2008 2:00 am<br /><br /><span style="color: rgb(0, 153, 0);">Maybe I'm just naive but I don't think nVidia's problem is AMD's gain. AMD's #1 enemy is Intel, which anyone with a clear mind knows that it plays leaps and bounds dirtier than nVidia or any company on earth. If nVidia becomes weaker, it will bow lower to Intel's monopoly force, which in the end hurts AMD and the whole industry.</span><br /><br /><span style="color: rgb(0, 153, 0);"><span style="font-weight: bold;">X86 is an instruction that should've been gone long ago but got life-supported by Intel's monopoly tactics. Now Intel's trying to put x86 into graphics? 
Please guys, if not for x86, with the same engineering effort the industry has put into PC, we could've been running 4-5GHz Power6-like CPUs on our desktops!</span> Lets still hope nVidia and its GPGPU gets enough momentum to stop Larrabee.</span><br /><br />by <strong><a href="http://www.amdzone.com/phpbb3/memberlist.php?mode=viewprofile&u=20725&sid=9ee814f199bba4b5cb570d57059cc215">abinstein</a></strong> on Thu Jul 03, 2008 4:06 am <div class="content"><!-- <hr /> --><blockquote style="color: rgb(153, 153, 0);"><div><cite>Woofermazing wrote:</cite>My memory is pretty vague, but wasn't Intel planning on having Itanium filter down to the desktop, and then AMD foiled that with the Athlon 64?</div></blockquote><br /><span style="color: rgb(0, 153, 0);">I believe AMD supported x86 for a good reason: they got a very good implementation (at that time) of the ISA, K7, which runs faster and scales better than any other implementation including Intel's P6 at that time. With any other instruction set (Power, MISP, EPIC, ...) AMD would've been non-competitive at all.</span><br /><br />...</div>pointerhttp://www.blogger.com/profile/17388854963223201475noreply@blogger.com3tag:blogger.com,1999:blog-31460852.post-637276125285492782007-02-26T07:16:00.000-08:002007-02-26T08:50:13.394-08:00PCI, Torrenza, CELL, Network Processor and FusionThis won't be a lengthy article, as I don't usually spend a long time writing a blog post :)<br /><br />Torrenza is a good technology, but it is not a totally new technology. Every function it tries to provide exists today. Co-processing has existed on all sorts of interfaces, with PCI being the most common one. Those co-processors don't require low-latency access by/to the CPU; bandwidth is the single biggest factor. Torrenza adds low latency into the picture, which indeed will create a new frontier of co-processing for applications requiring low latency at the system level. 
However, while it claims to be open, it is not as 'open' as PCI and all its derivatives. The PCI specs are easily available with full details, and the entire technology is guaranteed to be free. Geneseo will fill in the gap for low-latency co-processor interfacing and will see much wider technology adoption, even by being late into the game. Today's economy is an economy of scale: the technology that is most open, most used, and most backed by industry will win.<br /><br />While all these try to provide more co-processing power at the system level, internally dynamic co-processing chip designs already exist or soon will. Perhaps CELL is the best-known example of this, where the co-processing requirement can be dynamically programmed. I have a strong feeling that the concept came from IBM's own network processors, a business IBM exited a few months before the CELL launched. Intel still sells its IXP range of network processors.<br /><br />Fusion, to me, is just a funky name used for marketing purposes. It is another level down: no dynamic co-processing at runtime, but at design/manufacturing time. It gives AMD the jigsaw-puzzle ability to pick and match components within a silicon die. Anyway, this technology is not new at all; a single die today can actually be packaged into multiple chip SKUs by fusing off (or using a derivative design of) some of its components. All AMD plans to do here is add a GPU (and of course some changes to make the system homogeneous). AMD is smart enough to target the mobile platform first, where power is its key strength. 
<br /><br />After internal dynamic co-processing, maybe the next step is an FPGA within a chip, which would provide an even more dynamic co-processing nature.<br /><br /><br /><br /><br /><span style="font-size:-1;"> </span>pointerhttp://www.blogger.com/profile/17388854963223201475noreply@blogger.com1tag:blogger.com,1999:blog-31460852.post-35635999789751328022007-02-22T01:02:00.000-08:002007-02-22T07:40:13.789-08:00Barcelona's code name is not K10So many people have been trying to guess the correct code name for AMD's coming Barcelona processor. The INQ (and later those fanbois Sharikou, Scientia and their like) wished to call it K10. Well, it is not K10.<br /><br />Guess what: K10 is still under development, so Barcelona can't be K10. K9 is just a bad name.<br /><br />Again, Barcelona's code name 'is not' K10, period.<br /><br />P.S.: K8L more likely refers to Barcelonapointerhttp://www.blogger.com/profile/17388854963223201475noreply@blogger.com6tag:blogger.com,1999:blog-31460852.post-1166024414875218022006-12-13T06:07:00.000-08:002006-12-13T07:40:14.896-08:00The Secret of MegaTasking - Revealed!I just had a chance to talk to Henry from the A company, to understand the funky term - MegaTasking ...<br /><br />Q: What is multitasking?<br />A: <a href="http://www.answers.com/main/ntquery?s=multitasking&gwp=13">The running of two or more programs in one computer at the same time.</a><br /><br />Q: Then what is the difference between MegaTasking and multitasking? <br />A: MegaTasking is similar in some sense to multitasking, but it is more than just computing.<br /><br />Q: Can you elaborate further?<br />A: Sure! MegaTasking is about convergence. While our competitor is/was talking about computing, networking and communication convergence, we aim much higher. It converges almost everything in your daily life. For what you do in the living room, kitchen, laundry, neighbor, society, lan party, (342 words omitted). 
With our sophisticated design, advanced process, brilliant individuals, smart executives, enormous fanbois base, (1178 words omitted) ...<br /><br />Q: So ...? What does it have to do with the living room? Are you talking about your 'Live' stuff?<br />A: Nope, more than that.<br /><br />Q: Is that so?<br />A: Yup, the moment you turn on the MegaTasking in a living room, you get a PC and a heater.<br /><br />Q: What if it is summer?<br />A: Err ... you just turn your living room into a sauna spa.<br /><br />Q: ... then the kitchen?<br />A: With a proper casing, you just got yourself an oven. You can look at the screen for the recipe and bake the cake at the same time!<br /><br />Q: ... ... then the laundry?<br />A: Just put your wet clothes close to the fan; they will instantly dry. Better than most commercially available clothes dryers!<br /><br />Q: ... ... ... then the neighbor?<br />A: What's more fun than directing the noise at your stupid neighbor who uses our competitor's product? I'm sure our fanbois base would love this.<br /><br />Q: ... ... ... ... and anything else?<br />A: What's even more fun than 'legally' disturbing your opponent with noise and heat in a lan party game competition? <br /><br />Q: ... ... ... ... ... and some more?<br />A: MegaTasking is an innovation for the innovative. Think about it for a few minutes; I'm sure you can list more than what I have said.<br /><br />Q: ... ok, anything else to say?<br />A: Yup, our MegaTasking is fast.<br /><br />Q: How fast?<br />A: It can consume 1 megawatt-hour in just 41 days of full-day usage. Our competitor is not even close to that.<br /><br />Q: What about the computing speed?<br />A: Sorry, I gotta take a whiz ... 
bye.pointerhttp://www.blogger.com/profile/17388854963223201475noreply@blogger.com3tag:blogger.com,1999:blog-31460852.post-1165575646252169442006-12-08T02:56:00.000-08:002006-12-08T03:00:46.263-08:00Another Joke of the DayQuote from <a href="http://www.newsfactor.com/story.xhtml?story_id=013001BYD5Z4">http://www.newsfactor.com/story.xhtml?story_id=013001BYD5Z4</a><br />In making the announcement, AMD executives said that <b> even at 90 nanometers and 90 watts, its chips, on average, consume half the power of an Intel Core 2 Duo.</b> But Athlon's power consumption will drop even further, AMD said, with the 65-nm chips that will run at an average 65 watts.<br /><br />Wow, is the AMD executive hiring criterion the ability to lie without blinking? :)pointerhttp://www.blogger.com/profile/17388854963223201475noreply@blogger.com0tag:blogger.com,1999:blog-31460852.post-1163170429488410922006-11-10T06:41:00.000-08:002006-11-11T00:15:57.556-08:00Funny Q&AI just came across this funny Q&A that I really can't resist putting into my blog, so that maybe someone can decode what the answer 'really' means :)<br /><br />Excerpt from the <a href="http://www.theinquirer.net/default.aspx?article=35626">INQ interview</a> with the AMD guys<br /><br />INQ "That brings the question of drivers. AMD has been a staunch supporter of Linux, while many users of ATI had a hay-ride with drivers for Linux operating system. Nvidia, as the prime competitor has support of Linux community, while every once a while we hear news about petitions to ATI, drivers not working as intended."<br /><br />Phil "AMD is driving the industry to an open world, and we focus our strengths and with combined approach, achieve what's best for development of the industry around us. All ISVs are important for us."<br /><br />Btw, I just had a conversation with my friend, asking him how he feels about the current national politics; he said that his son is almost 3 years old and asked me how my son is. 
I answered that the table is made out of solid wood.<br /><br />Found another joke at <a href="http://www.dailytech.com/article.aspx?newsid=4897&ref=y">dailytech</a><br /><br />When asked if AMD has any concerns that its users may choose Intel processors if supplies of AMD chips run dry, DiFranco responded, "We don't expect our users to jump brand. Their loyalty comes from many years of dedication, and they're a sophisticated group. We think they will stay loyal over the long term; they're better served by sticking with AMD technology."<br /><br />What a marketing guy! Anyway, a side thought here: if most AMD executives think the same, then AMD is soon to be in trouble, as one commenter at that site said what was on my mind:<br /><i>and isnt that the same mentality that hurt intel? I love AMD but intel is back for now so why would i stick to amd in the coming time.</i>pointerhttp://www.blogger.com/profile/17388854963223201475noreply@blogger.com1tag:blogger.com,1999:blog-31460852.post-1163007676782531832006-11-08T08:47:00.000-08:002006-11-08T09:41:16.796-08:00Web 2.0 Impact to the Computer IndustryFirst and foremost, I hate the term "Web 2.0", simply because it is trademarked. That is simply stupid; Tim should have disallowed it. Anyway, let's put that aside and talk about what I think the impact of Web 2.0 on the computer industry will be.<br /><br />People easily argue that with the advancement of Web 2.0, the thick client is no longer needed; thin clients and powerful servers are the future. Well, to some extent, this is true. This would translate to increased server demand and better broadband connectivity.<br /><br />However, that deduction is simply too simplistic. 
Let's use the definition from <a href="http://en.wikipedia.org/wiki/Web_2.0">http://en.wikipedia.org/wiki/Web_2.0</a> and real-life examples to illustrate this.<br /><br />The Web 2.0 characteristics listed in the Wiki are as follows:<br /> 1) "Network as platform" — delivering (and allowing users to use) applications entirely through a web-browser.<br /> 2) Users owning the data on the site and exercising control over that data.<br /> 3) An architecture of participation and democracy that encourages users to add value to the application as they use it.<br /> 4) A rich, interactive, user-friendly interface based on Ajax.<br /> 5) Some social-networking aspects.<br /><br />The deduction we made in the second paragraph holds true for point #1. Google or even Microsoft will (eventually) enable some office applications through the web, be it the Internet or a more powerful version through a company's Intranet. Most day-to-day jobs, be they business or engineering work, can be done through the network (for the engineering case, I mean a remote session to a server).<br /><br />Points #2 and #5, however (take YouTube as an example), still leave room for the thick client. A powerful client allows users to encode their video, and to some extent add in some funky stuff beyond pure video encoding, at a more comfortable speed. Besides that, wireless broadband will be a hit as users become able to upload their content anywhere, anytime. Mobility is also key here, and hence mobile devices will get a boost from Web 2.0. In this sense, the computer industry has to fight the phone industry; it has to use its much higher processing ability to create content that is not feasible on a phone in the same period of time. Complex but easy-to-use video editing tools will be a selling point here.<br /><br />To some extent, I believe online gaming already provides some Web 2.0 characteristics. 
It provides an interactive meeting place and an outlet for creative expression, and some games even let users affect the game scene; I'm sure newer games will have more features than what I have listed here. The 'virtual world' will definitely need a powerful client for a better 'virtual' experience. Unless broadband bandwidth increases dramatically, those virtual scenes will still need to be rendered by the client.<br /><br />Web 2.0 will not spell doom for the thick client. Instead, it can be another inflection point for every segment of the computer industry: a boon to thin and thick clients alike, as well as to servers.pointerhttp://www.blogger.com/profile/17388854963223201475noreply@blogger.com0tag:blogger.com,1999:blog-31460852.post-1156403987699380762006-08-23T23:57:00.000-07:002006-08-24T00:27:18.643-07:00The UNTOLD Reason why AMD will have to go with Native QuadcoreWhile Intel is releasing its quad-core soon (expected Q4 2006), AMD will release its native quad-core two or three quarters later. AMD (and its fans) have also tried to play down Intel's non-native version, claiming the native approach is better.<br /><br />What people fail to realize is that if AMD used Intel's approach before coming out with its native version, the end result would be an internally NUMA quad-core, which is bad for mobile, for desktop, and even as a NUMA node in a server MP system. So it is really not a matter of native being better than non-native for AMD; it is just that the non-native approach is NOT good for AMD.<br /><br />Why bad? As of now (and for the foreseeable future), few if any apps are written with NUMA optimizations. And most desktop/laptop apps don't need that level of memory bandwidth (NUMA offers better bandwidth, but with a catch: it needs software optimization, which doesn't help all workloads). NUMA makes sense in server MP, not in desktops or laptops.
Having two memory links also raises the system cost, making it unsuitable for cost-sensitive markets.<br /><br />Mobile specifically is driven mainly by form factor, power, and wireless; 1P is definitely the solution there. NUMA within a 1P system means a minimum of two DIMMs, which might rule it out for certain very small form-factor mobile devices. The unnecessary memory bandwidth in most mobile applications also drives power up without guaranteeing significant improvement. (I'm not sure whether certain apps would even regress.)<br /><br />For server MP in particular, if this internally NUMA chip is used as a node, there will be multiple node distances in the whole MP design, which again makes software optimization harder.pointerhttp://www.blogger.com/profile/17388854963223201475noreply@blogger.com9tag:blogger.com,1999:blog-31460852.post-1153917989813337552006-07-26T05:33:00.000-07:002006-07-26T05:57:48.033-07:00Da Vinci Hinted at the AMD-ATI MergerAn internet <a href="http://computing-intensive.blogspot.com/2006/07/da-vincci-hinted-about-amd-ati-merger.html">site</a> reported that an unseen Da Vinci manuscript was found yesterday, and to the researchers' surprise, the AMD+ATI merger was predicted by Da Vinci hundreds of years ago. The words below appear on numerous occasions within that manuscript:<br /><br /><a href="http://www.legitreviews.com/news/2476/">DAAMIT</a><br />I.AM.TAD<br />I.AT.MAD<br />AIM.TAD<br />MAD.AT.I<br /><br />:)pointerhttp://www.blogger.com/profile/17388854963223201475noreply@blogger.com0tag:blogger.com,1999:blog-31460852.post-1153556157463942992006-07-22T00:31:00.000-07:002006-07-22T01:15:57.506-07:00IMC MythThere seems to be overhype around the IMC (integrated memory controller) within x86 CPUs. I am not bashing the benefit of having an IMC and thus lower memory latency; my point is that it is simply <span style="font-weight: bold;">overhyped</span>.
Every single feature within a CPU is an engineering decision one way or another. The main focuses are overall system performance and the target platform's usage models.<br /><br />Intel's new Core 2 Duo is without an IMC, yet it still outperforms AMD's K8 with its IMC; that speaks for itself. It is not that Intel will never use an IMC; it just does not need one <span style="font-style: italic;">yet</span>.<br /><br />Then there are people who argue about scalability (NUMA vs. UMA), saying that the C2D would not scale as well as AMD's design, and that in two years' time AMD's CPUs might take the lead again because of this. Well, I wouldn't disagree about the scalability issue and the <span style="font-weight: bold;">future possibility </span>of AMD taking the lead again. But who cares? As a desktop and laptop user, if I were to buy a decent system today, I'd definitely go for Intel's C2D, at least for now. The scalability issue is not my issue; it is Intel's architects' and design engineers' issue. Some might question this further: "<span style="font-style: italic;">yes, Intel can raise the FSB frequency and enhance the cache design for 2 or possibly 4 cores, but it will surely hit a bottleneck when it designs 8 cores and above"</span>. Well, it is not a user's concern. It is again the design team's concern how to overcome it, be it with an IMC or another method.<br /><br />Wait a minute, what about MP? As far as my desktop and laptop are concerned, I will not be using one, at least for the next few years. Why should I incur such ridiculous hardware cost, and possibly software cost, when a decent single multicore processor can do the job?<br /><br />Having said all that, AMD's IMC and ccHT (hence NUMA) do give it an advantage at the 4P-and-above server end.
The IMC is definitely not a deciding factor at the desktop and laptop end as of now, and not in 2P servers either, because dual-FSB chipsets are available.pointerhttp://www.blogger.com/profile/17388854963223201475noreply@blogger.com12tag:blogger.com,1999:blog-31460852.post-1153500461065499902006-07-21T09:01:00.000-07:002006-07-21T09:54:11.896-07:00From Fanboism to ExtremistIntel started its marketing campaign around the Core 2 Duo a few months back, and that is exactly when I started participating in blog commenting. Funny comments occasionally appear in technology news feedback, portraying fanboyism if not extremism :)<br /><br />After sending my comment to <a href="http://sharikou.blogspot.com">sharikou.blogspot.com</a> today, I had a thought: why not start my own blog and express my view on those fanboy comments? Of course, I might write my thoughts on technology as well, especially regarding the computer industry.<br /><br />AMD fanboys taken to the extreme, like Sharikou, like to disagree with whatever Intel does, discredit its ability, bad-mouth it, and most of the time make funny judgements. While I will not comment on his personal views, I will try to prove some facts here using logic, in a funny way :)<br /><br />Sharikou thinks that AMD's manufacturing capability is far superior to Intel's. He claims Intel has bad yields compared to AMD, uses 65nm with immature yields, and so on.
The list is quite long; if you are really interested, visit his blog.<br /><br />Below is the logic to prove him wrong.<br /><br />A quote from <a href="http://www.informationweek.com/hardware/showArticle.jhtml?articleID=190900525&subSection=Processors">informationweek</a>:<br /><i>In some cases, executives said, AMD walked away from business when price points became so low the deal was deemed unprofitable.<br />Henri Richard, AMD's executive vice president of worldwide sales and marketing, said AMD would only take business that makes sense for the company. "We are not going to chase what I call lighting a cigarette in front of a gas leak," he said.<br /></i><br />CPU prices are from the links below:<br /><a href="http://www.hkepc.com/bbs/itnews.php?tid=633569">http://www.hkepc.com/bbs/itnews.php?tid=633569</a><br /><a href="http://www.hkepc.com/bbs/itnews.php?tid=632181&starttime=0&endtime=0">http://www.hkepc.com/bbs/itnews.php?tid=632181&starttime=0&endtime=0</a><br /><br />The minimum AMD CPU price is USD 51.<br />The minimum Intel CPU price is USD 39.<br /><br />Please allow me to use my limited logic analysis here:<br /><br />1) AMD is a GOOD company and will do good things for humanity.<br />2) Selling CPUs is meant to make money, no matter how little; this is true for both Intel and AMD.<br />3) Hector is a good man and won't lie.<br /><br />Assume everyone wants to make at least a 10% profit, but AMD can't push its price below USD 51. So I'll assume its low-end CPU cost is about USD 46, and Intel's low-end CPU cost is about USD 35.<br /><br />Could AMD be making more than 10% on it, which would prove Sharikou's point that AMD's APM is far superior to Intel's Copy Exactly? It can't, without violating points 1 and 3. Since AMD is so good and supportive of humanity, of course AMD would support the USD 100 PC initiative. The key there is low CPU cost. A GOOD AMD would definitely sell cheaper CPUs when it could, and a good Hector would not lie.<br /><br />So, can Intel actually be selling at a loss?
It can't either, since Sharikou thinks Intel is so evil that it would simply refuse to make its CPUs support the USD 100 PC initiative, let alone sell them at a loss.<br /><br />So the conclusion is what the industry has already recognised (everyone except Sharikou, his friend Mike, and the AMD marketing VP): Intel has far superior manufacturing :)<br /><br />Btw, his posts contain endless jokes, from the Dell laptop explosion being caused by the Intel CPU (he managed to relate the two explosion sounds to dual core ...), to <a href="http://sharikou.blogspot.com/2006/06/intel-may-bankrupt-in-seven-quarters.html#links">Intel going bankrupt in seven quarters.<br /></a><br />Anyway, I'm not predicting that AMD will go bankrupt, and I believe AMD will continue to be a strong competitor to Intel, despite the fact that Intel is in the lead currently.pointerhttp://www.blogger.com/profile/17388854963223201475noreply@blogger.com6