Processors
Intel® Processors, Tools, and Utilities

I would like to see...

idata
Employee
3,763 Views

Intel make a 6-core (physical) CPU the desktop standard.

And remove all the technologies related to graphics (MMX, SSE, and the like), including the built-in graphics.

Just include the ones related to speeding up the processor and to security, like Execute Disable Bit, so graphics card manufacturers can really focus on creating a new type of card that coordinates better with the processor.

I would say that all those graphics technologies, including the built-in one, really get in the way of fully realizing the capabilities of the graphics card.

Imagine the processor doing some of the graphics work, and then the graphics card reprocessing those unfinished images and frames in the system. It's so redundant that it actually misses objects while playing a game like StarCraft 2, crashes in Crysis 2, and hangs in Autodesk Maya 3D.

Let's face it, man, it's true and it's happening right now as you read this. I want this post to reach the processor engineering team so they realize that the tiny glitches they see even on their social networking sites can be attributed to those unfinished, unprocessed, overlapping and possibly even 'missing' details of the picture they see.

Then I would be the first one to try that finished product.

0 Kudos
17 Replies
idata
Employee
919 Views

For example, take this image here showing the icon for setup-dvd-ripper-lite.exe, which is supposed to be a circular icon with the graphics inside.

In 'List', 'Details', and 'Small Icons' View, the icon looks like this:

But when viewed in 'Tiles', 'Medium' and 'Large', the image is fine.

Wonderfox Dvd Ripper Lite can be found here: http://www.MajorGeeks.com

Is Intel really doing its job???!

0 Kudos
RGiff
Honored Contributor I
919 Views

The i7-980 is a 6-core without built-in graphics.

0 Kudos
idata
Employee
919 Views

I see, but it did not solve my problem.

As I suggested, Intel's inappropriate integration of graphics into its motherboards and processors might be causing the distortion.

The whole architecture and technology of this processor is outstanding, not to mention the complete drop of MMX usage for it. However, the use of the SSE4.2 instruction set extensions is something I'm opposed to.

That is the redundancy I refer to. Pretty annoying, isn't it?

You see, simple problems like this, from continuously using this type of technology, could lead to serious gaming troubles and never-ending software updates.

Which never really solve any problem, and add more problems in the process.

And on Windows 7, I assume you experience pretty much the same problem that I have (the 'icon' rendering).

I do hope Intel has learned from its previous mistake (the MMX technologies).

I believe it would be helpful if Intel completely dropped all of its processor- and motherboard-attached graphics, leaving all the graphics-related processing to the graphics card.

As you can see in this Wikipedia article, the SSE4.2 instruction set extensions are notably problematic:

http://en.wikipedia.org/wiki/Streaming_SIMD_Extensions

0 Kudos
idata
Employee
919 Views

Not everyone uses processors for gaming; some people actually have productive applications for them too.

SSE/MMX are very useful for scientific computing, as they allow a program to execute several floating-point calculations simultaneously - drastically speeding up the program. To do that without SSE/MMX would require either doing calculations one-by-one, or sending them to a GPU, calculating, then acquiring the result - which introduces considerable complexity, latency and compatibility issues. If we're getting rid of SSE and MMX, why not drop the 8087 instruction subset while we're at it too? In fact, why not just go on eBay and buy an old 80386 PC?
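
To make the "several floating-point calculations simultaneously" point concrete, here is a minimal sketch in C using the SSE intrinsics (not from the original post; the array contents and sizes are purely illustrative, compile with something like gcc -O2 -msse):

    #include <stdio.h>
    #include <xmmintrin.h>   /* SSE intrinsics: __m128, _mm_add_ps, ... */

    int main(void)
    {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
        float sum[8];

        /* Scalar version: one addition per loop iteration. */
        for (int i = 0; i < 8; i++)
            sum[i] = a[i] + b[i];

        /* SSE version: four single-precision additions per instruction. */
        for (int i = 0; i < 8; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);   /* load 4 floats at once */
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&sum[i], _mm_add_ps(va, vb));
        }

        for (int i = 0; i < 8; i++)
            printf("%.0f ", sum[i]);
        printf("\n");
        return 0;
    }

Same result either way, but the SSE loop issues a quarter of the arithmetic instructions - that's the whole point of the extension.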

Enter OpenCL, and the on-chip graphics processors become invaluable. While the on-chip graphics processor is nowhere near as powerful as, say, a Radeon HD4890 or the new 4-teraflop Radeon (for embarrassingly parallel problems), it is considerably more energy efficient and a much cheaper solution. For problems that don't parallelise so widely, the on-chip GPUs are considerably more powerful than modern graphics cards. A cluster of cheap PCs that are just used for office work during the day can be used to crunch through massive calculations at night, thanks to the on-chip graphics processor. Without it, we'd need to spend extra money on GPUs, and electricity to power them (GPUs are notoriously power hungry; the 2xHD6870 cards in my home PC eat their cost in electricity).
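
For what it's worth, here is a rough sketch (my own illustration, assuming an OpenCL runtime and headers are installed and you link with -lOpenCL; error handling mostly omitted) showing that the on-chip GPU is just another OpenCL device a program can enumerate and hand work to:

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[4];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(4, platforms, &num_platforms);

        for (cl_uint p = 0; p < num_platforms; p++) {
            cl_device_id devices[4];
            cl_uint num_devices = 0;
            /* Ask each platform for its GPU devices; the processor's
               on-chip graphics shows up here just like a discrete card. */
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                               4, devices, &num_devices) != CL_SUCCESS)
                continue;

            for (cl_uint d = 0; d < num_devices; d++) {
                char name[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                sizeof(name), name, NULL);
                printf("GPU device: %s\n", name);
            }
        }
        return 0;
    }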

I'm all for the soup of both serial and parallel processing cores that Intel and AMD are moving towards. If you don't like the parallel processing parts, simply disable them, or avoid using software that can utilise them!

-Mark

0 Kudos
idata
Employee
919 Views

As for the SSE "problems" - it's not Intel's fault if other hardware companies and software companies don't bother testing their products properly. I'd rather have SSE and have to patch my OS after installing it than not have SSE at all. Should we avoid hard disks larger than 137GB, just because Windows XP originally didn't support them?

0 Kudos
idata
Employee
919 Views

Yes, I'm saying that Intel should stop using SSE for the graphics and GUI of all operating systems, and also stop integrating it into their processors. I'm sure Intel would run fine without it, seeing the speed and capabilities of the i7. SSE ruins very intricate graphics like what I've posted, and the same goes for intensive online/RPG games.

It (SSE) is not meant for them (Intel) and never will be. And do you know why it takes all night to crunch those numbers? It's because you rely on that puny integrated graphics chip on your Intel, instead of it doing the intensive instruction sets and algorithms.

And as for the graphics card you have at home, it is struggling to get those images and frames done properly after they've been messed up by Intel in the first stage of decoding and sampling!

Graphics cards get so power hungry because they have to double their power consumption in order to cope with Intel. And I'm sure it's the same with the other camp (AMD) because of their 3DNow! technology.

Wake up, guys, these two should be working together, not taking on each other's burden!!

Think about it and make a beta. There are a lot of individuals out there waiting to test their knowledge for the improvement of the world. They deserve the credit, and I'm sure NVIDIA would like to embark on that.

I'm sure this solution would ease the graphics card's burden of adapting to the processor (and being power hungry). But think about it: once the separation of graphics from processing has been done, all of us would benefit from the new generation of PCs it would create. Maybe then the graphics card won't be so power hungry anymore, because it can focus on rendering images and frames with no more puzzles from the processor to solve.

Imagine all the possibilities and the things that could be done if we could truly harness the power of the graphics card, which mostly just sits there all the time doing nothing.

Face it, some technologies are not really meant to be combined into one, like the processor and the graphics card/chip; they just screw up so much, OK?

0 Kudos
idata
Employee
919 Views

As stated in the Wikipedia article, SSE's typical applications are digital signal processing and graphics processing.

Moreover, since we all know that all graphical data comes from the primary hard disk drive (except for the BIOS), integrating graphics on the main processing unit (the processor) creates a great deal of complexity in the system when it's paired with an add-on graphics card (whether the port is used or not). This is because the processor has to choose which end the graphics will come out of (which is quite easy, since you have to set at least one as the default [if single monitor]).

The complexity comes in when the user requires intensive graphics resolution and quality [RPGs, online games and/or video editing]. These processes don't only require numerous instructions to be carried out by the processor, but also instructions for the graphical output that are just as numerous, since the two essentially run in tandem.

Surely the processor can handle it all, but what about the graphical processes? These will be left behind, as the built-in graphics chip can only handle a fraction (1/10th) of what the processor can handle.

0 Kudos
idata
Employee
919 Views

Firstly, if you want to send me abusive messages then do it in the open forum instead of private messages.

Secondly, if you want a basic CPU that has no fancy features, buy an old 386... oh wait, even those came with the 'fancy' features of their day, like protected mode, paging and support for an FPU...

For a 3GHz CPU, with the speed of light assumed to be 3E8 metres per second, an electronic signal can travel a maximum of 10cm in one CPU cycle. If the CPU has to send messages to a GPU every time it wants to do a calculation, then has to receive a reply, this will incur massive penalties. Even if we ignore latency and bandwidth limitations of the CPU-GPU connection, we're still wasting an extra 1-3 clock cycles just to physically transfer data between the CPU and GPU.
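
Spelling that figure out: 3E8 m/s divided by 3E9 cycles per second = 0.1 m, i.e. roughly 10 cm of signal travel per clock cycle - and that is before counting any bus-protocol overhead at all.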

Additionally, GPUs don't operate anywhere near the speed of CPUs, so for the average program which doesn't need to do massive calculations, each calculation will take over three times as long.

Now look at out-of-order execution, hidden registers and other techniques that can be used within a pipelined processor to massively improve execution speed. As soon as a job has to be offloaded to another chip, you can say goodbye to the massive performance gains obtained with modern processor design.

Regardless of what your little Wikipedia paragraph on SSE insinuates, SSE is widely and successfully used in video games, scientific software and multimedia; the fact that you see SSE as some kind of big evil demonstrates your lack of any real knowledge regarding hardware. If it really was some big evil that destroyed systems, Intel and AMD wouldn't be "wasting" valuable die space by supporting it.

0 Kudos
idata
Employee
919 Views

If you really are an expert, try writing a program to decode typical 10MP JPEG images (a fairly common everyday-user task).

Try it using:

1. asm-optimised raw software-implemented floating-point (essentially lots of integer calculations)

2. 8087 floating point (one of your evil extra instruction sets that's been around longer than Windows 95)

3. SSE

4. A GPU.

Post the results to give us all a laugh

Option (1) will be incredibly slow, even on an i7.... Option (2) on a 3GHz Athlon II X2 will be considerably faster than Option (1) on an i7... Option (3) will be over three times faster than (2) on most desktop processors made in the last 10 years (possibly excluding "mobile" versions that don't implement SSE). Option (4) won't be particularly portable, would require extensive and expensive testing if released as part of a product, and will have marginal (if any) performance gain due to all the extra delays incurred in inter-chip communications... and of course will fail on systems that don't have a compatible GPU.
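
If anyone actually wants to try it, a toy harness for comparing a scalar loop against the equivalent SSE loop could look roughly like the sketch below (my own illustration, not a real JPEG decoder: a saxpy-style colour-conversion kernel stands in for the decoder's inner arithmetic, and the buffer size and repeat count are made up):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <xmmintrin.h>   /* SSE intrinsics */

    #define N (1 << 20)      /* ~1M samples, a stand-in for decoded pixel data */
    #define REPS 100

    int main(void)
    {
        float *y  = malloc(N * sizeof(float));
        float *cr = malloc(N * sizeof(float));
        float *r  = malloc(N * sizeof(float));
        if (!y || !cr || !r) return 1;
        for (int i = 0; i < N; i++) {
            y[i]  = (float)(i & 255);
            cr[i] = (float)((i * 7) & 255);
        }

        /* Option (1)/(2) style: one R = Y + 1.402*(Cr-128) per iteration. */
        clock_t t0 = clock();
        for (int rep = 0; rep < REPS; rep++)
            for (int i = 0; i < N; i++)
                r[i] = y[i] + 1.402f * (cr[i] - 128.0f);
        clock_t t1 = clock();

        /* Option (3) style: the same formula, four pixels per instruction. */
        const __m128 k   = _mm_set1_ps(1.402f);
        const __m128 off = _mm_set1_ps(128.0f);
        for (int rep = 0; rep < REPS; rep++)
            for (int i = 0; i < N; i += 4) {
                __m128 vy  = _mm_loadu_ps(&y[i]);
                __m128 vcr = _mm_loadu_ps(&cr[i]);
                _mm_storeu_ps(&r[i],
                    _mm_add_ps(vy, _mm_mul_ps(k, _mm_sub_ps(vcr, off))));
            }
        clock_t t2 = clock();

        printf("scalar: %.2fs   sse: %.2fs   (r[0]=%.1f)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, r[0]);
        free(y); free(cr); free(r);
        return 0;
    }

Note that a modern compiler at -O2 may auto-vectorise the scalar loop anyway, which rather proves the point about how ubiquitous SSE has become.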

Of course, this is assuming you're capable of assembly programming which I guess you implied when you messaged me to tell me you really know what you're talking about (presumably you know better than Intel/AMD/IBM/ARM's engineers too then).

As I said regarding the SSE bug: Should we also avoid hard disks over 137GB, just because *certain* Windows XP versions don't support them properly?

0 Kudos
idata
Employee
919 Views

By the way, should I test it on PassMark Performance Test 7? Hehe, just kidding.

Yeah, man, I would love to do that, if only I had the funds and support.

The idea of a forum is to openly share ideas with others in order to formulate new ideas that further enhance the data/information stored in our minds, including our understanding/improvement of the things around us.

Also, I would do that on my own set of specs. Because if I did it under your specs, it would be suicide, man. That's why I didn't mention any of those when expressing my idea. (=

I don't have to restate my problem here, but seeing that simple thing screw up makes me wonder what else up there is screwed up as well. And I'm not saying I'm an expert, dude, more like an inventor/renovator sort of thing.

Speed of light, cycles per second, latency time, yes, we all know that.

We don't have to imitate AMD. We're the original.

These are my specs:

  1. A redesigned i7 without the built-in graphics and without SSE.

  2. A redesigned 4GB GPU with a 1033MHz DDR5 memory clock and integrated SSE (since we want to process a typical 1.5-hour MP4 movie on a 60Hz HD display with a 1600:900 ratio, at 45 fps with 192kbps 7.1-channel output, since we want to put in subtitles to understand it better, also for the sake of money-making and the enjoyment of other languages). Yay!

  3. A board that fits it all, of course without built-in graphics.

     

Now here's the basic flowchart:

  1. On power-up, retrieve the BIOS and MBR; the OS continues to load (the GUI and all other processes) into system memory.

  2. The processor/CPU then takes all of it (the GUI and all other processes) and does its job accordingly, but in my case it would separate the GUI and other graphics/display data from the non-graphical data.

  3. Once separated, the CPU/processor deposits all GUI and other graphics/display data into the memory of the GPU card, over an exclusive 16-lane two-way link to the processor (for continuous amending/changing/resolving of graphics data) (tentative).

  4. This goes on until the PC is shut down.

Forget the limitation incurred/imposed by SSE on 137GB hard disk drives running on certain versions of Windows XP, since DX11 on Vista/7 is better compared to what's built into XP.

If most people have moved on from the 8086 CPU architecture, then what more would it take for some people *like you* to move on from Windows XP? What's hindering you? I'm sure there's an application out there that is fully compatible with Windows Vista/7.

And besides, Microsoft can certainly patch that bug up.

0 Kudos
idata
Employee
919 Views

You say "We don't have to imitate AMD. We're the original."... While I agree with the second part, most of the "supercomputers" at one of the universities I work at use AMD processors; Intel processors only appear in the smaller grids - and considering the far superior power consumption of Intel processors, I doubt the "low cost" of AMD chips is the driving force there.

As for your specs:

1. Shall we drop the 8087 instruction set while we're at it, since that is like a primitive serial form of SSE?

2. Why have SSE on a graphics card?? It would be faster to execute SSE on the processor, considering all the "speed of light, cycles per second latency time" that you dismissed earlier. It would probably be easier to do the calculations serially on the FPU than to use an external SSE chip, and what happens to the people who don't have graphics cards? Musicians, for example, whose software just needs to execute a load of DFTs or DWTs? Bit of a waste forcing them to spend $100+ on graphics cards that would give worse performance than their old SSE-capable processors? As to the GPU specs, you're basically asking for something tailor-made to whatever it is you do on your PC... Remember, plenty of other people out there use their PCs for other stuff... And where do you get 45fps video from anyway??

3. So everyone has to buy a graphics card, even the low-end user who just wants to write Word documents and read their email?

As for the flowchart:

1&2. What do you mean "CPU then takes all processes"? What was running them beforehand? Assuming you mean the GPU is to execute the BIOS, initialize system RAM and load an MBR, what initializes the GPU?

3&4. So how does the GPU know what graphics data to load? What about graphics generated on the CPU (e.g. rendered web pages - something that would take forever on a GPU).

As for DX11 - it's nice to see DX finally catching up with OpenGL (finally, DX supports tessellation too!), even if it is several years late...

I use Windows XP for Windows stuff, and Linux for everything else. The main thing "hindering" me from using newer Windows is:

a) Cost - Linux is free.

b) Lack of customisability - While I like the robustness of Windows, the shell can be incredibly annoying at times...

c) Windows is nowhere near as versatile - due to the closed-source nature. I depend on Microsoft to supply OS internal features designed specifically for my OS (e.g. AVX support for my i7, which XP will never have), whereas the Linux community provides popularly requested updates even when the main developers don't.

d) Windows is inherently less efficient due to (b) and (c). The fact that Windows Server 2008 runs faster in VMWare Server than when run natively on a system is a testament to that...

I intentionally didn't mention "security" there, as I've never had a virus on XP x64, and Windows 7 hasn't had any kind of security problems that a new Linux release wouldn't also have. There's nothing I dislike about XP x64 that has since been removed/fixed in newer versions, and nothing new in Vista/7 that I don't already have on Linux. Don't forget that Microsoft had to inflate their Windows Vista sales stats by forcing people to buy Vista/7 licenses in order to use XP, due to the poor uptake of Vista (which never exceeded a 20% market share if I remember correctly).

If your CPU "design" really had any advantage over current designs, don't you think they'd already be rolling it off the fabs by now? On-chip graphics is good, as it saves low-end users from having to get a dedicated graphics card (which would just be an electricity hog and annoying noise to them). If you don't want the on-chip GPU, then just disable it and use a discrete graphics card instead; the presence of a (BIOS/OS-disabled) on-chip GPU won't slow down the external GPU at all. If you don't want SSE, then move over to Linux and you can compile yourself a kernel that won't support SSE at all. After all, Microsoft don't provide such a kernel (even in Windows 7), despite the fact that systems without AVX/FMA/SSE/MMX/FPU/pipelining/caches are far superior to systems with them (according to you).
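
Incidentally, software already treats SSE as optional: a program can ask the CPU at run time whether it's there and fall back to plain scalar code if not. A minimal sketch using GCC/Clang's <cpuid.h> helper (the printout is just illustrative):

    #include <stdio.h>
    #include <cpuid.h>   /* GCC/Clang wrapper around the CPUID instruction */

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            printf("CPUID leaf 1 not supported\n");
            return 1;
        }

        /* Feature bits reported by CPUID leaf 1, EDX register. */
        printf("MMX:  %s\n", (edx & bit_MMX)  ? "yes" : "no");
        printf("SSE:  %s\n", (edx & bit_SSE)  ? "yes" : "no");
        printf("SSE2: %s\n", (edx & bit_SSE2) ? "yes" : "no");

        /* A library would branch here: take the SSE code path if present,
           otherwise use the plain scalar code path. */
        return 0;
    }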

0 Kudos
idata
Employee
919 Views

Yes, drop the 8087 architecture and build a new one instead.

Because I've been noticing lately that there is a graphics-out chip/unit (aside from the built-in graphics) inside the processor. And this chip certainly does nothing but dump finished graphics/image/data output onto the graphics card. In short, all it does is give the graphics card images to render, and that's it. Nothing more to do, no more data to store, no rendering or anything related to graphics processing left for the GPU to handle.

I make that statement because I have a GPU meter here, from the Windows Vista gadgets, that says it can monitor GPU activity and memory usage. From the time I started using this tool, the activity graph has always shown a flat line sitting at the center (50%), and the memory usage always stays the same (2.50MB), even though I'm using Internet Explorer 9, which claims to utilize the GPU (sometimes it fluctuates just a little bit from 45%, and it spikes sometimes whenever I open an application).

Addgadget.com (source, GPU meter)

Yes, those things are true, but there still seems to be something missing.

How about we remove the L2 and L3 cache of the processor (leave only an 8MB L1 for the instruction sets) and use the GPU memory as the L2 cache, literally? It would be like two processors accessing one single cache, sharing the same memory. And this cache should be exclusively for 'user-initiated' processes and tasks, the ones the user clicked or opened via the mouse or the 'Enter' key, per se. Nothing more. This would result in ultra-real-time processing and rendering of not only applications but also all the graphics/data/image processes being shown on the computer screen. That would be impressive.

The 'separate all processes' thing would happen after the OS has finished loading all the system processes and prepares itself for any user-initiated activity to handle. There the two processors can then work on separating the data in real time, since they only have one set of data to tackle, which is the 'user data in the L2 cache'. And the exclusive 16-lane two-way processor-GPU link would be for the real-time updates to the graphical/image output sent to the monitor.

That is what I want to be heard about (hear me out).

0 Kudos
idata
Employee
919 Views

45 fps, by the way, is true if you're watching high-octane action movies. (=

While I was taking a bath this afternoon, it got me thinking: why not use the SLI interface/port for the direct connection of the L2 cache? That way we don't have to create a new breed of GPU cards. Just reprogram/flash each GPU card model from oldest to newest, use our existing DDR3 system memory slots, and build a new type of motherboard with an SLI port sticking out of it and a new kind of CPU.

That way we could see how each type of card grinds through the video/graphical/image data and choose which one is the best.

The possibilities are endless!!!

Both CPU and GPU cards can then do the following:

  1. Add/remove or even share parallel technologies to eliminate redundancies and errors in rendering graphical data.

Thus making the need for an additional GPU card obsolete and unnecessary.

So good luck, guys, and see you in the next generation of computer models.

AMD is so toast for integrating the GPU chip onto the CPU, BTW.

0 Kudos
idata
Employee
919 Views

45fps sounds silly; I've never heard of a flatscreen monitor supporting anything in between 30 and 59fps...

As for the L2-SLI idea... I'm going to assume that you're either about 14 years old or a troll, as that would be a death sentence for system performance!

0 Kudos
idata
Employee
919 Views

I never mentioned that it was computer-generated/with special effects, or that we're going to be using an HD-type monitor. And besides, we're also going to be using it for astronomical monitoring and 3D heart imaging, even for weather prediction.

Computer graphics (what's displayed on the monitor) are dynamic, man. They're ever-changing, from the time we start up the PC, to loading Internet Explorer, to typing in these words as I think about them and the letters appear continuously on the screen, even up to the time you're scrolling down the page with your mouse, clicking the scroll bar/Page Down/arrow down/mouse wheel and the like to read this post I've written.

So if it's dynamic, then the data should be simultaneously two-way.

1GB (or maybe even 3GB, because MSI already announced one recently this March, a Radeon I think) is enough (but won't be as times change) for an average PC user to load up his/her applications like:

  1. Internet Explorer with 3 tabs (a Facebook tab, a topic he/she is interested in, and an email site like AOL, Hotmail, Yahoo, etc.)

  2. Firefox (Safari, Opera, you name it) for incompatible sites he/she might be visiting

  3. Windows Media Player

  4. MS Word, PowerPoint and/or Excel (for anything you might be working on, maybe printing lyrics for your next rehearsal, an office worker doing his work off-site [maybe even using MS Access, SQL and/or Windows Cloud], or a fashion designer at a Yorker magazine preparing her next presentation)

  5. Paint (for editing pictures he/she has taken)

  6. An audio and/or video converter (for her smartphone) (plus iTunes for an iPod)

  7. A virus scan (MS Security Essentials, because she probably noticed it needs attention)

You're just jealous because you can't think of a way to improve PC performance aside from overclocking (typical/average PC user).

0 Kudos
idata
Employee
919 Views

The key here is to increase the bandwidth (lanes) from the CPU to the graphics card (maybe 32 two-way lanes will do, and I call it the CPU-GPU SLI technology; for the record, this would replace the traditional PCI bus, which has a fixed clock speed and a limited number of bus lanes, 4, 6, or 8 I think). This is private and exclusive access between the two components, for if it's not, we'll be having one heck of a lot of trouble here, since 'rogue' data from the internet could wreck your system. Work balance means heat balance as well, and this will lessen the trips required to the hard disk / solid state drives.

On computer start-up, the system would not only assign addresses to the main memory but also to the graphics card (making it the L2 cache).

On Windows 8 or Windows Server 2012, its cache (the list and compilation of recently / most-used applications / data) should not be loaded into the graphics card memory but should stay in the main memory. It should only load background services, i.e. security suite software and running applications.

BTW, I like the 3rd-gen i7 Extreme desktop processor this company has made.

0 Kudos
idata
Employee
919 Views

Another thing I've noticed while browsing through my PC is that the graphics adapter/VGA is connected to an IRQ. This connection limits the overall performance/output frames of the average PC, since an IRQ basically holds several other devices, and the VGA/graphics adapter has to wait its turn to process its information. In some instances it even shares this IRQ with your built-in graphics chip, and the user has to disable one of them to get a display working, which is just plain silly.

If you're talking about my frame rates, then maybe you haven't played any online games yet, like Crossfire, which clocks in at 40 to 60 fps.

I'd say we should connect the VGA directly to the CPU, allocate several pin-outs directly to it, and treat it as an nth part of the CPU (hex-core plus one).

And we should include the VGA's memory count in the BIOS POST test and map it all as one contiguous memory.

0 Kudos