I2C is not a wired network standard! It's designed to be used between chips on the same PCB, typically with the same power supply rail (although level shifters are available) and the same ground reference plane. Yes, ground planes matter for best performance.
Breakout boards are meant for a quick test. Then you're supposed to design a PCB with the devices you need all in one place. SparkFun spaghetti will only end in tears.
The other thing that sucks about I2C is that you never know what hard-coded addresses you will be limited to, until you've selected all your parts, dug into the datasheets and then discovered the inevitable collisions.
In 20 years of PCB design I think the largest number of I2C devices I've had on one bus is maybe 30? And that was pushing it.
What sucks the most about I2C for me is the fact that the bus can get stuck if the master device resets during a transmission. In that case the slave device will hold the SDA line low while waiting indefinitely for clock pulses. Yes, you can probably send dummy clock pulses to finish the frame and get SDA released, but that's just a pain to code properly.
SPI is so much more reliable with just two extra pins, I usually don't even consider I2C anymore.
Yeah that's pretty disappointing. Gotta implement a "bus clear" feature in any system that relies on I2C working consistently.
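For reference, a bus clear is usually just "toggle SCL until the stuck slave finishes its byte and releases SDA" (the spec suggests up to nine pulses, followed by a STOP). A rough, hardware-agnostic sketch in Python, with the GPIO access injected as callables since the real thing is platform-specific:

```python
def i2c_bus_clear(read_sda, write_scl, max_pulses=9):
    """Attempt to un-wedge an I2C bus after a master reset.

    A slave stuck mid-byte may hold SDA low waiting for clock edges.
    Toggling SCL up to 9 times lets it shift out the rest of its byte
    plus an ACK slot, after which SDA should be released.

    read_sda:  callable returning the current SDA level (0 or 1)
    write_scl: callable driving SCL to the given level
    """
    for _ in range(max_pulses):
        if read_sda() == 1:
            return True          # bus released; now issue a STOP condition
        write_scl(0)             # falling edge: slave shifts its next bit
        write_scl(1)             # rising edge: bit gets sampled
    return read_sda() == 1       # still stuck: time to power cycle devices
```

If this returns False you're in the "device holds the line forever" territory discussed elsewhere in the thread, and a power cycle is about all that's left.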
Slave devices that rely on clock stretching are a total pain in the ass, as well. Tend not to play nice with other normal devices.
https://www.i2c-bus.org/clock-stretching/
The main thing I use I2C for in recent designs is multiple channels of power monitoring, e.g. using the INA226. I agree that SPI is preferable for sensors that support it.
And I'm currently working with the INA229, its SPI cousin I guess :) Those chips are really nice, I'm trying to use the alert feature as a configurable overcurrent trip protection, as well as overvoltage device protection.
SPI requires 3 wires plus a Chip Select for each device on the bus. That 30 device network under discussion suddenly needs 33 pins. That's more pins than any processor I've used in the last couple of years.
I mean I guess we could use an i2c GPIO expander to get extra lines. I bet that would be "fun" to get right.
Many MCUs also don't handle SPI well (via DMA) or have bugs in their peripherals. So the only viable way is bit-banging, and that is difficult to make faster than I2C.
I am considering implementing SPI on FPGA to connect with devices and then transmit to/from MCU using "SPI" dialect that works with that MCU.
What kind of issues have you found with SPI peripherals? I've rarely found issues with SPI, certainly it's a peripheral which is far less buggy on average than I2C, where there are some absolutely awful implementations.
You can use a demultiplexer to turn e.g. 4 pins into 16 chip selects. But the fact that the number of wires has to grow at all to accommodate additional devices is certainly a big downside compared to I2C, and for wired networks that basically makes it impossible to daisy-chain devices.
If you had 30 SPI devices, you'd probably use a small FPGA to address them all (produce all the chip selects as needed). Basically, write an integer (e.g. 5 bits would do for 30 selects) and a strobe and you're good to go. No address conflicts, ever. And very fast.
This can be detected and usually just mux scl to a gpio and cycle until the bus is recovered. The real issue is if a device decides to hold scl low forever. At which point it's recommended to power cycle devices on the bus until it recovers.
SPI is however not usable as a bus in the same way. You need a dedicated chip-select line for every chip you communicate with. But yes, I like SPI much better as well.
Those little breakout boards are such a liberation for us oldsters who can't handle the tiny little surface mount components. I've gotten stuff well into development with an ExpressPCB loaded with breakout boards, and the engineers can turn it into a real board with real code in a jiffy while the prototype is being used for testing.
Oh, don't get me wrong, the Sparkfun and Adafruit breakout boards are indeed a boon for prototyping. It's just that many hobbyists (myself included) get as far as the prototype working as intended, and call it a day. And with "Sparkfun spaghetti" and "Adafruit anarchy" I now have phrases to describe what the inside of those prototypes often look like.
You could of course design your own multiplexer with more bits of multiplexer address space and extend the concept indefinitely. I wouldn't recommend it, but it's possible.
64*127 = 8128, normal i2c is limited to 7 bit addresses. There are extended i2c addressing setups but I believe that requires every chip involved to support them.
The third thing that sucks about I2C is a little thing called Clock Stretching which can completely blow up your latency when reading sensors. The last company I worked for had to design a sensor out because it would stretch clocks unpredictably for several milliseconds, which was enough to blow our latency requirement to maintain real-time operation.
This is a nice breakdown, however it is leaving out one interesting piece: Active slew rate controllers.
In some cases where you need to exceed the typically allowed bus capacitance either due to a high number of attached devices, or over a long cable run (it happens...it sucks, but it happens), you can use a part like the LTC4311 which, rather than using resistors to passively pull the bus lines to a resting high state, detects the direction changes and actively assists in pulling the lines to their intended states.
I wish software packages had data sheets! It would be great to have a concise/standardized format with bullet points and a "typical applications" section on github landing pages.
Analog Devices and Linear Technology (now owned by Analog Devices) have the cleanest looking datasheets. Texas Instruments is ok but not great. Standard Microcircuit Drawings, the kind of government datasheets used for milspec parts, are absolutely horrendous and should be avoided if the manufacturer has a normal looking datasheet to look at.
Linears are great, I find the AD sheets hard to read sometimes. Microchip historically has some decent ones but it’s been a while since I’ve used their parts.
Datasheets for Japanese connectors are like a circle of hell for me. Confusing and possibly incomplete dimensions, and the drawing usually looks like it was printed, scanned, and converted to jpg several times.
Ugh, I've been using a datasheet from Panasonic [0] recently, and it's been a trip. The original Japanese is all there, with a lackluster English translation below each paragraph. Plugging the Japanese in to Google Translate for the particularly bad sections usually helps. At least this version is clean, I ran across a few PDFs floating around for this part that looked like they had been run though the print-scan-jpeg cycle a few times.
Wait until you see Chinese-market-only parts without datasheets in English, where the English translations that do exist were mostly done by not-so-bright engineers in the sales offices of big Western semiconductor companies.
TI's technical reference manuals for their MCUs are some of the best I've used, though. That's not saying much. But compared to Marvell or NXP/Freescale they're really good.
Cypress PSOC chips have a bunch of software modules you can load into them to implement various functionality, and each piece of such software includes a datasheet, just as if it was a standalone chip. It's glorious.
In "Object-Oriented programming an evolutionary approach", Brad J. Cox (the inventor of Objective-C) hoped for the birth of "software-ICs", software components that should have been widely reusable, and documented with the equivalent of the datasheets used for traditional ICs. This was his vision for OOP.
IC datasheets are dominated by the mundane but critical things like timing diagrams and electrical tolerances. Software components can't even agree on which "ICs" fit into which breadboards.
The problem I have is the sense "you're only allowed to ask for progress or give direction if you code it".
Maybe you're eg a UI/UX expert, you can help plan the direction for a project, feedback the changes needed and why, help make the project a success but just aren't competent enough to code those changes. Sure, no-one owes you their FOSS work, but it seems like we lose something if the only input allowed is from people able to do the coding.
Of course you might also just be a user. If you pay it forward, do you get to make a feature request?? If I don't code it myself ... maybe I should stop being a user if I can't contribute code?
Demanding work for free, and offering suggestions for improvements can be seen as synonymous, but they can actually be vastly different.
Projects I use heavily, like Ubuntu, I try to make myself useful offering advice on forums. That's not putting "money" in the bank of any coders but seems like it's in the spirit of FOSS - contributing what we can to create a better system.
Read your comment after a whole day of desperately troubleshooting an i2c bus on (way too) long wires. I already held an mcp2515 board in my hands thinking about how to retrofit every device with CAN, knowing this would take weeks until robustly running. Maybe adding LTC4311's will keep the system running until I truly have time for the conversion.
Thanks a lot for your comment! (Just created an account to write this!)
That's right. A current source (active) instead of a 'poor man's current source' (a resistor) can help fix problems. I had to run 400 kHz i2c over a few meters, and it really helps to drive the cables properly.
It's very easy to remove the pull up resistors from most break out boards.
The most likely issue you will encounter with trying to connect many I2C devices is address collisions where devices have fixed addresses or at least some fixed bits in the address. This will probably limit you to about twenty or thirty useful devices on a bus.
How is the allocation of addresses done typically? Are they always hardcoded? Is the same chip then available with different addresses (different SKU)?
> Is the same chip then available with different addresses
This is sometimes done, though more common in my experience is to have some input pins which the chip uses to determine the actual address.
This might be as simple as one to three inputs which should be connected to ground or supply voltage to encode the last bits of the address.
Other chips are a bit more creative, and allow you to, in addition to ground and supply, tie the address pins to the I2C clock and data lines. This generates four combinations per address pin.
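As a made-up illustration (the base address and pin names here are hypothetical, not from any particular datasheet), the address decoding usually amounts to something like:

```python
# Hypothetical sketch of how a part might map address-select pins
# to its 7-bit I2C address. BASE_ADDR and pin names are invented.
BASE_ADDR = 0x48

def addr_from_pins(a2, a1, a0):
    """Three pins tied to GND (0) or VDD (1) give 8 consecutive addresses."""
    return BASE_ADDR | (a2 << 2) | (a1 << 1) | a0

# Four-state pins (GND, VDD, SDA, SCL) encode 2 bits each, so n such
# pins give 4**n selectable addresses instead of 2**n.
FOUR_STATE = {"GND": 0, "VDD": 1, "SDA": 2, "SCL": 3}

def addr_from_four_state(base, pin_ties):
    addr = base
    for tie in pin_ties:
        addr = (addr << 2) | FOUR_STATE[tie]
    return addr & 0x7F  # the result is still only a 7-bit address
```

So two four-state pins buy you 16 addresses from a single part number, versus 4 with ordinary two-state pins.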
People have already touched on setting the addresses using the pins but sometimes you can just buy near identical I2C parts that have slightly different part numbers because the range of user-selectable addresses is different.
Typically addresses are only 7 bit (10 bit are possible but not all devices support that).
So you either have a way to set the address using resistors / address lines, or you have some config mode where you set the configuration for the sensors one by one (Cypress touch sensors had this).
most common practice I've seen at least with typical arduino/rpi sensors is fixed address for the same chip, sometimes you can change one or two bits so you can have two/four of the same kind in the same bus, sometimes you have address conflicts with similar devices from the same manufacturer (e.g. bosch bme680, bme280 and bmp180).
> " ...and I'm sorry to say, the answer is way lower than 127"
127 is a really high number that nobody should encounter in real design situations. I2c was conceived for communications between chips on the same board; it is extremely unlikely that one could fit that many chips on a board, all talking i2c with each other on the same bus. I2c switch chips also can be used to split the bus in multiple parts so that only a few chips at a time are physically connected to the bus. Example: the PCA9548 from TI, NXP and possibly others.
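For what it's worth, driving a PCA9548-style switch is about as simple as I2C gets: you write a one-byte bitmask to the switch's own address to choose which downstream channel(s) are connected. A sketch, with the actual bus write injected as a callable so it isn't tied to any one driver:

```python
# Minimal sketch of driving a PCA9548-style 8-channel I2C switch.
# Devices with clashing addresses live on different channels, and you
# enable one channel at a time before talking to the device behind it.

PCA9548_ADDR = 0x70  # default address with A2..A0 tied to ground

def channel_mask(channel):
    """Control-register value that enables exactly one downstream channel."""
    if not 0 <= channel <= 7:
        raise ValueError("PCA9548 has channels 0-7")
    return 1 << channel

def select_channel(write_byte, channel):
    """write_byte(addr, value) is supplied by your I2C driver,
    e.g. smbus2.SMBus.write_byte on Linux."""
    write_byte(PCA9548_ADDR, channel_mask(channel))
```

Writing 0x00 disconnects everything, which is also handy for isolating a misbehaving branch of the bus.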
Exactly. "Edge" cases. The arrogance of making assumptions about unknown requirements and deployments without asking enough users/customers. Sometimes there are obscure but damn important features and software (i.e., safety, banking, finance, government, military, healthcare, etc.), which is why apparent popularity is a nearly meaningless signal when deployment could be unknowingly ubiquitous.
---
"No one will ever need that code, so let's remove it."
"Aaaah, what did you do!? My SCADA products for a nuclear power plant no longer work!"
This seems to only get into the electrical issues associated with connecting lots of devices to an I2C bus. Due to deficiencies in the protocol specifications and real-world implementation issues, there are logical problems as well. As a consequence, it is near impossible to make it 100% reliable.
If you're interested in the unpleasant details, whitequark has posted about this at length on Twitter.
eeyup, I2C is probably my least favorite protocol at this level. It's an absolute minefield of problems (often subtle and hard to reproduce), and I go out of my way to avoid it if at all possible.
Hard to find any supporters of i2c these days. To be 100% compatible, sda and scl must be bi-directional. Complicates bus buffers; also complicates a bit-banging implementation. In the olden days, this was fairly easy by having an input on each pin, with the pull-down transistor per pin as shown in the article.
The main drag about SPI is the phase and polarity business. Being opinionated (!), I wish SPI devices were all CPHA=0 and CPOL=0 (bits clock in on the positive-going edge).
Always hated breakout board producers for this pull-up resistor thing, especially since they are often targeted at the educational market. Why would you solder fixed pull-ups guaranteed to work only when it's just you on the bus (and not even then, as it's pretty common for uCs and SoCs to include their own pull-ups)? Teach your users the simple calculation needed to add pull-ups themselves!
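The calculation really is simple. Following the rise-time criterion from the I2C spec (rise time is measured between 0.3·VDD and 0.7·VDD, so t_r ≈ 0.8473·R·C), a quick sanity check looks like:

```python
from math import log

def pullup_range(vdd, bus_capacitance_pf, t_rise_ns, vol=0.4, iol_ma=3.0):
    """Min/max I2C pull-up resistance per the spec's rise-time criterion.

    Rise time is measured between 0.3*VDD and 0.7*VDD, hence
    t_r = R*C*ln(0.7/0.3) ~= 0.8473*R*C.
    Standard mode allows t_r <= 1000 ns, Fast mode <= 300 ns.
    The minimum comes from the 3 mA sink-current / 0.4 V VOL limit.
    """
    r_min = (vdd - vol) / (iol_ma / 1000.0)        # don't exceed sink current
    c = bus_capacitance_pf * 1e-12
    r_max = (t_rise_ns * 1e-9) / (log(7.0 / 3.0) * c)  # meet rise time
    return r_min, r_max

# 3.3 V bus, 100 pF of trace + device capacitance, Fast mode (400 kHz):
lo, hi = pullup_range(3.3, 100, 300)
# lo ~= 967 ohm, hi ~= 3.5k -- the common 2.2k value sits comfortably inside.
```

Note how quickly the window closes as capacitance grows: stack enough breakout boards on the bus and r_max drops below r_min, at which point no resistor value is valid.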
It's way easier to remove a pull-up resistor that's present and undesired than to add one that you don't have in your parts stash. And there are way more beginners/hobbyists who won't have one and won't understand why their board doesn't work (and think your board sucks) than people who will have their circuits not work for unknown reasons when they stick multiples together.
By the time you're doing that, you're probably reading someone's tutorial on how they did it and monkey-see, monkey-do, or you're far enough along in your learning journey to know to desolder some of the pull-ups.
Arduino "won" by making things dead-easy to get working for people who frankly didn't have a clue what they were doing. This is a good thing, IMO. If it works straight away, you learn something and are inclined to take the second step. If the first attempt starts with a theoretically better lesson on how to calculate, select, order, and solder pull-up resistors, you're failing at user engagement and unboxing experience by more than you're gaining in theory-learning.
Partly agree, but it's not like you don't have to use resistors anywhere else with Arduino and the likes.
Just to light up a LED you will need some and usually there are plenty included in starter kits. At least where I live there still are a couple of electronics retail shops where you can buy spare components.
Also arguing that desoldering smd resistors is easier than finding a through hole one and plugging it into a breadboard or soldering into a prototype board is a bit of a stretch.
I have an "I2C" (no clock wire necessary, but it's present) bus with 512 WS2811 devices split evenly across a 1:4 fan-out power and signal integrity multiplexer. This is because I couldn't get 384 devices to maintain signal integrity (only about 340), likely due to too much load on the bus or reflections. 256 devices worked fine.
Clocked buses, especially large parallel clocked buses, are generally a bad idea due to clock skew (clock and data signals getting farther out-of-sync with different wire length and loads.)
WS2811 chips are not "I2C" - they're daisy-chained, active driven (no pullups) and regenerate the signal (and re-edit it) as they go
My experience is that chip to chip signal integrity for them is not an issue - however power/ground noise is, especially if you try and pass the power over the string to the far end
I think the thing I've seen that's the hardest to detect is reflections on the falling edge of SCL causing double clocking. It's been a long time since I've seen it; maybe more modern controllers slew-rate limit the falling edge. But putting an I2C device at the end of a long cable is definitely not a great idea.
If you are getting reflections at 400Kbps, you either have another problem or are way out of the standard.
There’s a reason for things like RS422/485.
The hardest bug I’ve seen on an I2C bus was an implementation that every now and then dropped the lines to GND for a small period (~50ms). All I2C devices were working correctly, but because of supply chain reasons we had to introduce a PN change that was on paper 100% compatible.
The new PN was a SMBus device, and guess how SMBus signals a reset...
I know, but clock rate tends to be a good proxy, especially in cases like this.
For a bus like I2C there’s basically two possibilities:
1. You are using a part whose clock rate exceeds the standard, which means you have very small pull-ups on the lines because that is what the datasheet recommends for that speed (so your rise time gets very short).
2. (And this one is way more likely) you are using I2C to communicate over a long cable.
That's too bad, the Arduino was probably the cheaper part. I have a technique which I use when bringing up new circuits and breakout boards for the first time, which is to wire resistors in series with all of the i/o pins. This will usually be good enough to limit current flowing through output transistors and protection diodes, so each device can tolerate minor mistakes in wiring or coding, 5 versus 3.3 V disagreements, or an errant scope probe touching the wrong thing. You can use rules of thumb for RC time constants to work out resistor values, but typically 1k for output lines and 10k for input lines have served me well as starting points.
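For the curious, the back-of-the-envelope check behind those starting values is just the RC time constant against the bit period, plus the worst-case fault current. Something like this (the numbers are illustrative, not from any datasheet):

```python
def series_resistor_check(r_ohms, c_load_pf, bit_rate_hz, vdd=3.3):
    """Sanity-check a protective series resistor on a digital line.

    The resistor and the load's input capacitance form an RC low-pass;
    keep several time constants inside one bit period so edges settle.
    Also report worst-case current if the line is shorted to a rail.
    """
    tau = r_ohms * c_load_pf * 1e-12          # RC time constant, seconds
    bit_period = 1.0 / bit_rate_hz
    settles = bit_period > 10 * tau           # ~10 tau: edge fully settled
    fault_ma = vdd / r_ohms * 1000            # current into a hard short
    return settles, fault_ma

# 1k into a ~10 pF input at 100 kHz: tau = 10 ns against a 10 us bit
# period, so edges are fine, and a 3.3 V miswire is limited to 3.3 mA --
# well under what protection diodes and output drivers typically survive.
```

The same arithmetic explains why the 10k input-side value works: inputs are high impedance, so the larger resistor costs nothing but limits fault current even further.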
Gatekeeping comments like that are not helpful. There are many beginner-friendly tutorials for 8-bit AVR chips, and many projects don't need the extra features of a 32-bit ARM chip.
It's a good point and I don't consider it gatekeeping. The newer devices really do cost less and if you're staying within the Arduino ecosystem, the tools will work with (usually) no code changes, especially if you're doing beginner stuff, so the faster chips are just as easy to use, while the increased RAM/Flash resources mean you can use languages like Python or, gasp, templating.
Forget the "extra features." They're just along for the ride. Silicon is cheap!
8-bit controllers like AVR are a lot more accessible if you are going with the no-Arduino approach of just datasheet + compiler. All the peripherals are described in a single reasonable-size datasheet, and you don't need much hardware initialization code except for features that you directly use. Bringing up an ARM chip from scratch, without using a generator like STM32CubeMX or a framework like Arduino with existing hardware support packages, will take a lot more effort.
Yes, if you're doing that. Which is why I said "if you're staying within the Arduino ecosystem"
I've been programming embedded systems for-fricking-ever and even I stay with Arduino libraries if I can get away with it for non-critical projects simply because it's easier.
Are those really tutorials for AVR chips or tutorials for the Arduino ecosystem?
Arduino is not limited to AVR, there is no need to put up with the limitations of those aging chips, unless you are accessing hardware registers directly you can probably run the same software on a STM32 or SAM D board.
I was specifically thinking of the ATtiny85 when I made my comment. A beginner's first electronics project is often blinking an LED on and off. When I'm teaching a room full of kids electronics for the first time, the simplicity and low-cost nature of the ATtiny chips are hard to beat. I get it, ARM chips are great for real projects, but there is still a place for 8-bit AVR.
I've literally just received a delivery of through-hole ATTiny85s and a programmer, so I can dead-bug an I2C interface onto an analogue sensor. I have no idea where I'd start looking for an ARM in anything like the same form factor.
I've just had a quick look, and there are some very small BGA-packaged STM32 chips about. With a competent PCB fab you might be able to knock together a board with 8 jumper pins that wouldn't be much bigger in total than a SOIC ATtiny, but that's another job plus whatever peripheral components need to be added. Might be a fun design job, though.
Oof, yeah. Have spent so much time trying to get multiple devices to work reliably on the same bus. As more devices are added rise and fall time of the data line also becomes a thing. It can be really hard to get things working reliably and handle edge cases. Have also found different host MCUs are able to handle signal errors varyingly well. On some chips I’ve worked with the I2C peripheral won’t recover from an error on the line until the MCU is completely powered down and back up. This poses its challenges for overall system reliability.
All this is to say I’m always impressed that SparkFun has managed to build a product line around plug and play I2C with their QWIIC stuff. I’ve never used it though, so not sure how well it holds up with multiple devices.
QWIIC is pretty nice to use from the wiring perspective. Not just for the plug and play nature but also the fine gauge of the wire in the cables. It makes it a lot easier to create little one off devices without going the printed circuit board route.
And with Adafruit also adopting the standard there are more options for devices.
Good article and explanation on the real-world issues that can be encountered with I2C devices. I wish the author had gone into more detail on how to solve those issues by using discrete sensors instead of breakout boards.
Those breakout boards are just for experimentation / prototyping.
You typically design a proper PCB for the final project and of course you can also buy individual sensors.
I took my first stab at designing and having my own PCB manufactured for a mechanical clock I'm trying to build. At the moment I'm stuck trying to debug the 7 x MCP23017 devices on my I2C bus. I picked up an oscilloscope and added a level converter between the RPi and my PCB, but I feel pretty lost as I'm a complete novice and have no formal background in electronics. This article provides a lot of useful information for someone like me! Online all you usually get are single MCP23017 implementations, nothing too complex.
This is similar to the very similar HPIB/GPIB/IEEE488.
In theory you can have 32 addresses but in reality because of electrical limitations (even defined in the IEEE488 spec) you can only have about 24 devices.
The pull-up resistors allow I2C to have multiple masters on the same bus. A master takes control of the bus by pulling either SCL or SDA low (can't remember which), and if another master detects a low voltage when it expects a high, it knows the bus is currently taken. SPI only has one active master on the bus at a time, so there's no need for pull-up resistors to implement this arbitration mechanism. SPI masters just drive their bus lines with push-pull outputs.
It's even cooler than that: it has collision avoidance.
To start a transfer you pull SDA low while SCL is high (a START condition); both masters may do this at the same time. Then they start driving SDA with the address bits. If at any point the bus doesn't carry the bit you are trying to put on it, you stop. This happens when you drive a 1 and the other master drives a 0 (since the bus is open collector, the 0 wins): you know you couldn't put your '1' on the bus, so you shut up and let the other master carry on.
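Since the bus is a wired-AND, the whole arbitration scheme can be simulated in a few lines. This is a simplified model (address bits only, no data phase, ideal timing), but it shows why the lowest address always wins:

```python
def arbitrate(addresses, num_bits=7):
    """Simulate open-drain arbitration between masters sending their
    target addresses MSB first. The bus is a wired-AND: a 0 from anyone
    wins. A master that drove a 1 but reads back a 0 has lost and backs
    off; whoever remains owns the bus without any bits being corrupted.
    """
    contenders = list(addresses)
    for bit in range(num_bits - 1, -1, -1):
        # bus level is the AND of everything being driven onto it
        bus = 0 if any(((a >> bit) & 1) == 0 for a in contenders) else 1
        # masters whose bit disagrees with the bus drop out silently
        contenders = [a for a in contenders if ((a >> bit) & 1) == bus]
    return contenders

# Three masters targeting 0x50, 0x29, 0x2A: the lowest address survives.
winners = arbitrate([0x50, 0x29, 0x2A])
```

Note the losing masters' transmissions are never garbled on the bus; they simply notice they've lost and retry later, which is the "collision avoidance" part.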
Typically, no. In SPI all lines are actively driven high or low rather than an open drain configuration. The bus master drives SCK (SPI clock), MOSI (master out, slave in) and CS (chip select), and the slave device(s) drive the MISO (master in, slave out) line(s). From an operating perspective, pullup/pulldown resistors are not required.
Now having said that, in some cases it's considered appropriate to add weak pullup/pulldown resistors to the data lines to ensure that they're in the expected state on power up and to prevent glitches from putting slave devices into weird states.
Three-wire SPI is pretty common, where everyone's MOSI and MISO are connected together open-drain style, and the master clocks out 1s to let the slave pull the data line down for read operations. You have to clock out something from the master during a 4-wire SPI read anyway, so it doesn't change a lot most of the time.
It's worth noting that I2C dates back a long ways, when we were still doing a lot of things like TTL, and weird asymmetric termination circuits were the norm. And it was under patent for a long time, so its development was stunted, and a lot of people used I2C peripherals by bit-banging GPIO pins with primitive code.
Virtually all SPI devices that I know of are CMOS with push-pull outputs and high-impedance inputs, which are much more friendly to work with. Until you try to drive the bus with two devices by accident. See my other comment about series resistors. I have a circuit right now that's operating a SPI bus with a 100 MHz clock on a small two-layer board and I'm getting away with it. A real engineer wouldn't allow this, but it's strictly for a research project.
I2C is electrically far more fragile than CAN. CAN is robust against all sorts of electrical faults. While you might not be able to transmit data, you won't fry the controller, and once the fault is removed it comes back.
I2C is often more tolerant of unexpected traffic on the bus. There are far too many ECUs that will put a vehicle into limp mode (limit speed drastically) if they detect unexpected CAN traffic, even traffic not directed to them. Protocol-wise I2C is significantly simpler to get right.
To add to this, I2C is more common between a controller and 'slaves' which are architecturally simpler, for instance single output sensors often have I2C as their comms interface.
CAN is really meant for distributed embedded systems where you are having to implement full blown communication between equally complex nodes.
CAN is indeed much more robust both at a physical layer with a differential line and at a transfer layer with all sorts of error detection and mitigation mechanisms.
I see, thanks. I'm asking mostly from a model airplane perspective, where you have a receiver, GPS, mag, transmitter, etc. Right now they use UART, with only the mag using I2C, but CAN sounds like a better solution, especially given the amount of electrical noise you get from the motors/servos.
The closest comparable protocol IMO would be SPI-- which generally requires more wiring and more "slave management" (addressing a certain slave device is typically done via dedicated chip-select connection, while sharing data lines).
Potential data rates are significantly higher (>1MBit/s)
But SPI is even less specified, and you will sometimes encounter slaves with diverging/exotic requirements (regarding chip select/clock timing behavior, actual protocol structure on bit level, etc.). This is also typically full duplex unlike both others.
I2C is highly suitable as an interface to a bunch of registers, while being still very simple (especially wiring/addressing) and easy to get to work. Slave devices are also typically more consistent in behavior/requirements than with SPI.
CAN is by far the most complex of these; it is not intended for connecting components on a single board, but instead to connect discrete components. It typically needs transceivers (separate from your controller), and has typically the most rigid timing requirements for participants. Multiple bitrates are possible, and the protocol already comes with mechanisms for clock synchronization, CRC checking, message retransmission on error and basic protocol structure (11/29bitID + up to 8 data bytes per message).
Actual useful bandwidth is often the worst with this protocol, partly because overhead is very significant (think ~150 bits on the bus for every message containing 8 bytes of data + a 29-bit ID).
Max Bus baudrate is typically 1Mbit but 250K is most common, while higher rate extensions exist (CAN-FD).
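The overhead numbers are easy to reproduce. Counting the fields of a classic CAN data frame (per CAN 2.0 framing, before bit stuffing, which can add roughly one bit per four in the stuffed region):

```python
def can_frame_bits(data_bytes=8, extended_id=True):
    """Bit count of a classic CAN data frame, before bit stuffing.

    Extended (29-bit ID) framing: SOF + 11-bit base ID + SRR + IDE +
    18-bit ID extension + RTR + r1 + r0 + 4-bit DLC + data +
    15-bit CRC + CRC delimiter + ACK slot + ACK delimiter +
    EOF (7 bits) + interframe space (3 bits).
    """
    if extended_id:
        header = 1 + 11 + 1 + 1 + 18 + 1 + 1 + 1 + 4
    else:  # standard 11-bit ID: SOF, ID, RTR, IDE, r0, DLC
        header = 1 + 11 + 1 + 1 + 1 + 4
    trailer = 15 + 1 + 1 + 1 + 7 + 3
    return header + 8 * data_bytes + trailer

bits = can_frame_bits()  # 131 bits before stuffing for 8 bytes + 29-bit ID
# Worst-case stuffing pushes that to ~160 on the wire, so at 250 kbit/s
# you get well under 2000 full frames per second, and a payload
# efficiency of roughly 64/150 ~= 43%.
```

So the "~150 bits per 8-byte message" figure above is the pre-stuffing 131 plus typical stuff bits.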
Never used PJON before, but AFAICT it's more of a software stack on top of existing protocols (while also defining a custom physical layer). It is MUCH less pervasive than the other three in my experience.
I would say that while CAN is much more complex as a protocol, it is far simpler from the point of view of the software interface: CAN works by sending and receiving atomic messages, and usually there's a straightforward message-box protocol with the controller which can be used. I2C on the other hand requires a very complex (and timing-sensitive) interface between the software and the peripheral, due to many details of the protocol being underspecified and varying from device to device. SPI is far simpler than the other two, even though there are many variations (usually just a matter of tweaking some bits in the peripheral).
I can see your point, but I disagree with the "CAN software interface is typically simpler"-- in my experience, there is often a significantly larger API surface with CAN than with I2C on microcontrollers, and you are forced to deal with it because otherwise NOTHING works; examples are bit timing configuration, parameters for behavior when messages are not acknowledged, picking the right frame format, etc.
While with I2C, chances are much better that your microcontroller manufacturer just provides you with a HAL_I2C_Mem_Write function, and you just use defaults everywhere, without understanding or caring, and things just work™.
But all this probably depends a lot on specific platform...
I've had the opposite experience: each I2C device has its own way of using the constructs of the protocol, and you need to customise your code for each one, even though they all basically implement the same concepts: e.g. how do you address the register (after you've addressed the device), how do you read vs write, how do you read multiple bytes, empty a FIFO, what does an ACK or NAK mean? And I2C means the software has to decide this on every. single. byte. And woe betide you if the software chose wrong or takes too long to decide, because then the bus is wedged and nothing will happen. And that's before you get into buggy implementations, either in the micro (STM32s have had multiple different buggy I2C peripherals) or on the device. I've had multiple hair-pullingly bad bugs in I2C where it mostly worked fine until some edge case or another caused the whole thing to lock up unpredictably, causing bugs which ranged from subtle to hardware-destroying. It's absolutely the furthest from 'just works' I've seen at this level.
On the other hand, CAN is mildly more difficult to configure initially (usually you need to understand the message box protocol if you want maximum efficiency), but it's then pretty damn bulletproof, and far less actual code to worry about.
I think at worst it is a wash. Having used various devices on both, CAN is more regular. The complexity you attribute to CAN is there with I2C too, just poorly documented: devices resetting for no reason, etc. I think CAN is preferable, and you can even use it on short runs without the physical interfaces.
That's helpful, thank you. CAN does sound much more heavy-weight, though I guess it might be a good fit when you have multiple MCUs (not just sensors to connect to one MCU).
CAN is intended for real-time applications, i.e. with bounded latency. Its also intended for use in noisy environments, hence the differential signalling.
Both these characteristics are required for automotive applications.
I2C is relatively susceptible to noise in comparison to CAN. I2C is also very simple to interface with from both a h/w and s/w perspective. CAN, having timing requirements requires more thought on the s/w side.
UARTs are also simple to use and can be used in multi-drop protocols for using multiple devices and can also give good range and noise rejection in some variants (i.e. RS485, RS422, etc).
SPI is for on-board, relatively high-speed use with a handful of devices (maximum) but offers limited noise rejection.
There are also other simple protocols (e.g. one-wire) but they're less popular these days.
PJON on the other hand is pretty much unknown and is a very much higher level protocol that isn't really comparable to CAN-bus.
PJON is more an abstraction over a bunch of different protocols.
CAN is for off board comms.
I2C should really never leave the board.
I've actually seen a blend though, where someone hooked a couple CAN transceivers to bridge an I2C bus off board (a simple LED board had to be 3 meters away from the micro controller). At the end of the day they should have just put a micro controller on the other end and ran CAN over it, as they still had a bunch of glitches anyway.